Jan 21 13:02:18 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 21 13:02:18 crc restorecon[4596]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 21 13:02:18 crc restorecon[4596]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 
13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 13:02:18 crc 
restorecon[4596]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 
13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:02:18 crc restorecon[4596]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 13:02:18 crc restorecon[4596]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 21 13:02:19 crc kubenswrapper[4765]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 13:02:19 crc kubenswrapper[4765]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 21 13:02:19 crc kubenswrapper[4765]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 13:02:19 crc kubenswrapper[4765]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 21 13:02:19 crc kubenswrapper[4765]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 21 13:02:19 crc kubenswrapper[4765]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.108237 4765 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112621 4765 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112713 4765 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112727 4765 feature_gate.go:330] unrecognized feature gate: Example Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112734 4765 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112741 4765 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112749 4765 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112755 4765 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112763 4765 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112770 4765 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112781 4765 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112791 4765 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112801 4765 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112808 4765 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112814 4765 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112828 4765 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112833 4765 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112839 4765 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112844 4765 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112850 4765 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112855 4765 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112865 4765 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112872 4765 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112878 4765 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112883 4765 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112888 4765 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112893 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112899 4765 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112909 4765 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112916 4765 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112926 4765 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112935 4765 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112944 4765 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.112951 4765 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113033 4765 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113039 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113045 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113050 4765 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113056 4765 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113061 4765 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113070 4765 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113076 4765 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113081 4765 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113087 4765 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113092 4765 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113098 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113104 4765 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113109 4765 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113114 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113120 4765 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113125 4765 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113133 4765 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113142 4765 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113148 4765 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113154 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113159 4765 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall 
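Each "unrecognized feature gate" record above is the upstream kubelet rejecting an OpenShift-specific gate name, and the same list is re-emitted every time the gate set is parsed (it repeats several more times below). When triaging a log like this, a sketch along these lines collapses the noise into one count per gate, assuming the journal has first been dumped to a file (e.g. journalctl -u kubelet > kubelet.log):

```python
# Sketch: collapse the repeated "unrecognized feature gate" warnings into one
# count per gate name. The regex matches the record format visible above;
# kubelet.log is a hypothetical dump of this journal.
import re
from collections import Counter

pattern = re.compile(r"unrecognized feature gate: (\S+)")

with open("kubelet.log") as f:
    gates = Counter(pattern.findall(f.read()))

for gate, count in gates.most_common():
    print(f"{count:3d}  {gate}")
```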
Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113166 4765 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113172 4765 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113177 4765 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113183 4765 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113188 4765 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113195 4765 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113227 4765 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113241 4765 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113246 4765 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113251 4765 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113256 4765 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113261 4765 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113271 4765 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113276 4765 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113281 4765 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.113285 4765 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.113829 4765 flags.go:64] FLAG: --address="0.0.0.0" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114156 4765 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114177 4765 flags.go:64] FLAG: --anonymous-auth="true" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114189 4765 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114223 4765 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114234 4765 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114246 4765 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114257 4765 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114265 4765 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114271 4765 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114280 4765 flags.go:64] FLAG: 
--bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114288 4765 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114297 4765 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114305 4765 flags.go:64] FLAG: --cgroup-root="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114315 4765 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114323 4765 flags.go:64] FLAG: --client-ca-file="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114331 4765 flags.go:64] FLAG: --cloud-config="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114339 4765 flags.go:64] FLAG: --cloud-provider="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114347 4765 flags.go:64] FLAG: --cluster-dns="[]" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114364 4765 flags.go:64] FLAG: --cluster-domain="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114372 4765 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114380 4765 flags.go:64] FLAG: --config-dir="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114387 4765 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114397 4765 flags.go:64] FLAG: --container-log-max-files="5" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114409 4765 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114417 4765 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114425 4765 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114434 4765 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114443 4765 flags.go:64] FLAG: --contention-profiling="false" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114451 4765 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114459 4765 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114468 4765 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114477 4765 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114488 4765 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114497 4765 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114507 4765 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114514 4765 flags.go:64] FLAG: --enable-load-reader="false" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114524 4765 flags.go:64] FLAG: --enable-server="true" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114531 4765 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114544 4765 flags.go:64] FLAG: --event-burst="100" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114552 4765 flags.go:64] FLAG: --event-qps="50" 
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114559 4765 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114567 4765 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114575 4765 flags.go:64] FLAG: --eviction-hard="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114585 4765 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114592 4765 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114600 4765 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114608 4765 flags.go:64] FLAG: --eviction-soft="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114616 4765 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114623 4765 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114630 4765 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114638 4765 flags.go:64] FLAG: --experimental-mounter-path="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114645 4765 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114653 4765 flags.go:64] FLAG: --fail-swap-on="true" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114660 4765 flags.go:64] FLAG: --feature-gates="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114671 4765 flags.go:64] FLAG: --file-check-frequency="20s" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114682 4765 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114690 4765 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114697 4765 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114704 4765 flags.go:64] FLAG: --healthz-port="10248" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114712 4765 flags.go:64] FLAG: --help="false" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114720 4765 flags.go:64] FLAG: --hostname-override="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114727 4765 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114736 4765 flags.go:64] FLAG: --http-check-frequency="20s" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114743 4765 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114751 4765 flags.go:64] FLAG: --image-credential-provider-config="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114759 4765 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114766 4765 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114774 4765 flags.go:64] FLAG: --image-service-endpoint="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114781 4765 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114789 4765 flags.go:64] FLAG: --kube-api-burst="100" Jan 21 13:02:19 crc 
kubenswrapper[4765]: I0121 13:02:19.114796 4765 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114804 4765 flags.go:64] FLAG: --kube-api-qps="50" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114812 4765 flags.go:64] FLAG: --kube-reserved="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114820 4765 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114827 4765 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114834 4765 flags.go:64] FLAG: --kubelet-cgroups="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114841 4765 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114848 4765 flags.go:64] FLAG: --lock-file="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114856 4765 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114878 4765 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114886 4765 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114899 4765 flags.go:64] FLAG: --log-json-split-stream="false" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114909 4765 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114918 4765 flags.go:64] FLAG: --log-text-split-stream="false" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114926 4765 flags.go:64] FLAG: --logging-format="text" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114933 4765 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114942 4765 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114951 4765 flags.go:64] FLAG: --manifest-url="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114959 4765 flags.go:64] FLAG: --manifest-url-header="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114973 4765 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114982 4765 flags.go:64] FLAG: --max-open-files="1000000" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.114992 4765 flags.go:64] FLAG: --max-pods="110" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115000 4765 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115008 4765 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115014 4765 flags.go:64] FLAG: --memory-manager-policy="None" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115022 4765 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115030 4765 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115037 4765 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115045 4765 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 
13:02:19.115071 4765 flags.go:64] FLAG: --node-status-max-images="50" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115079 4765 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115086 4765 flags.go:64] FLAG: --oom-score-adj="-999" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115093 4765 flags.go:64] FLAG: --pod-cidr="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115100 4765 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115113 4765 flags.go:64] FLAG: --pod-manifest-path="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115120 4765 flags.go:64] FLAG: --pod-max-pids="-1" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115127 4765 flags.go:64] FLAG: --pods-per-core="0" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115135 4765 flags.go:64] FLAG: --port="10250" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115142 4765 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115149 4765 flags.go:64] FLAG: --provider-id="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115156 4765 flags.go:64] FLAG: --qos-reserved="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115166 4765 flags.go:64] FLAG: --read-only-port="10255" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115173 4765 flags.go:64] FLAG: --register-node="true" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115180 4765 flags.go:64] FLAG: --register-schedulable="true" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115187 4765 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115233 4765 flags.go:64] FLAG: --registry-burst="10" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115242 4765 flags.go:64] FLAG: --registry-qps="5" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115249 4765 flags.go:64] FLAG: --reserved-cpus="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115259 4765 flags.go:64] FLAG: --reserved-memory="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115269 4765 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115277 4765 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115285 4765 flags.go:64] FLAG: --rotate-certificates="false" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115292 4765 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115300 4765 flags.go:64] FLAG: --runonce="false" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115308 4765 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115316 4765 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115324 4765 flags.go:64] FLAG: --seccomp-default="false" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115331 4765 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115338 4765 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 21 13:02:19 crc 
kubenswrapper[4765]: I0121 13:02:19.115346 4765 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115353 4765 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115361 4765 flags.go:64] FLAG: --storage-driver-password="root" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115369 4765 flags.go:64] FLAG: --storage-driver-secure="false" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115376 4765 flags.go:64] FLAG: --storage-driver-table="stats" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115384 4765 flags.go:64] FLAG: --storage-driver-user="root" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115391 4765 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115399 4765 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115407 4765 flags.go:64] FLAG: --system-cgroups="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115413 4765 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115427 4765 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115435 4765 flags.go:64] FLAG: --tls-cert-file="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115442 4765 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115453 4765 flags.go:64] FLAG: --tls-min-version="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115460 4765 flags.go:64] FLAG: --tls-private-key-file="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115468 4765 flags.go:64] FLAG: --topology-manager-policy="none" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115475 4765 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115482 4765 flags.go:64] FLAG: --topology-manager-scope="container" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115490 4765 flags.go:64] FLAG: --v="2" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115501 4765 flags.go:64] FLAG: --version="false" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115511 4765 flags.go:64] FLAG: --vmodule="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115520 4765 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115529 4765 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115766 4765 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115778 4765 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115787 4765 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115797 4765 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
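The flags.go:64 records above enumerate every effective kubelet command-line flag, one FLAG: --name="value" pair per record, which makes them easy to diff between nodes or boots. A sketch of a parser for that exact format:

```python
# Sketch: parse the 'flags.go:64] FLAG: --name="value"' records dumped above
# into a dict, e.g. for diffing effective kubelet flags across nodes.
import re

FLAG_RE = re.compile(r'flags\.go:64\] FLAG: (--[\w-]+)="([^"]*)"')

def parse_flags(journal_text: str) -> dict:
    return dict(FLAG_RE.findall(journal_text))

sample = ('Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.115100 '
          '4765 flags.go:64] FLAG: --pod-cidr=""')
print(parse_flags(sample))  # {'--pod-cidr': ''}
```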
Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115805 4765 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115813 4765 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115820 4765 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115868 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115875 4765 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115882 4765 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115888 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115897 4765 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115905 4765 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115913 4765 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115921 4765 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115929 4765 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115936 4765 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115944 4765 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115951 4765 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115959 4765 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115968 4765 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115976 4765 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115983 4765 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115989 4765 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.115996 4765 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116002 4765 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116008 4765 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116015 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116020 4765 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116027 4765 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116035 4765 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116042 4765 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116048 4765 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116055 4765 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116061 4765 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116068 4765 feature_gate.go:330] unrecognized feature gate: Example Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116074 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116080 4765 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116088 4765 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116094 4765 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116101 4765 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116107 4765 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116113 4765 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116119 4765 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116126 4765 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116133 4765 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 
13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116141 4765 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116148 4765 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116155 4765 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116161 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116168 4765 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116175 4765 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116181 4765 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116188 4765 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116193 4765 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116199 4765 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116206 4765 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116236 4765 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116243 4765 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116249 4765 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116255 4765 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116261 4765 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116268 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116275 4765 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116281 4765 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116288 4765 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116295 4765 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116301 4765 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116308 4765 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116314 4765 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.116321 4765 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.116332 4765 
feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.128431 4765 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.128506 4765 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128707 4765 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128746 4765 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128760 4765 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128772 4765 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128786 4765 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128797 4765 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128810 4765 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128821 4765 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128834 4765 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128848 4765 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128859 4765 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128870 4765 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128883 4765 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128894 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128907 4765 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128920 4765 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128935 4765 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128949 4765 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128963 4765 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
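The feature_gate.go:386 summary at the start of this block is the authoritative record: after all the warnings, these fifteen gates are what the kubelet (v1.31.5, per the server.go line) actually applied. The map[...] syntax is Go's map formatting; a sketch that turns it into a Python dict:

```python
# Sketch: parse the effective-gate summary printed above, of the form
# 'feature gates: {map[Name:bool Name:bool ...]}', into a Python dict.
import re

def parse_feature_gates(record: str) -> dict:
    body = re.search(r"feature gates: \{map\[(.*?)\]\}", record).group(1)
    return {name: val == "true"
            for name, val in (kv.split(":") for kv in body.split())}

sample = "feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}"
print(parse_feature_gates(sample))
# {'CloudDualStackNodeIPs': True, 'KMSv1': True, 'NodeSwap': False}
```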
Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128977 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.128989 4765 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129001 4765 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129011 4765 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129021 4765 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129031 4765 feature_gate.go:330] unrecognized feature gate: Example Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129041 4765 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129051 4765 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129061 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129070 4765 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129079 4765 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129090 4765 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129104 4765 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129115 4765 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129125 4765 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129139 4765 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129149 4765 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129158 4765 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129167 4765 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129177 4765 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129186 4765 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129195 4765 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129240 4765 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129254 4765 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129266 4765 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129277 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129286 4765 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129296 4765 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129306 4765 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129316 4765 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129326 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129336 4765 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129346 4765 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129355 4765 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129365 4765 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129377 4765 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129387 4765 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129397 4765 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129406 4765 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129416 4765 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129426 4765 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129435 4765 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129447 4765 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129460 4765 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129471 4765 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129483 4765 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129495 4765 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129505 4765 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129515 4765 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129525 4765 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129534 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129548 4765 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.129565 4765 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129860 4765 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129879 4765 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129891 4765 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129903 4765 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129919 4765 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129930 4765 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129940 4765 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129950 4765 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129961 4765 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129971 4765 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129982 4765 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.129992 4765 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130001 4765 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130012 4765 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130023 4765 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130034 4765 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130046 4765 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130057 4765 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130070 4765 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130085 4765 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130099 4765 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130112 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130124 4765 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130134 4765 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130145 4765 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130155 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130165 4765 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130175 4765 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130185 4765 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130194 4765 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130205 4765 feature_gate.go:330] unrecognized feature gate: Example Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130249 4765 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130259 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130270 4765 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130283 4765 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130296 4765 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130309 4765 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130320 4765 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130331 4765 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130341 4765 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130351 4765 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130364 4765 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130375 4765 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130385 4765 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130396 4765 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130407 4765 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130418 4765 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130429 4765 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130438 4765 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130447 4765 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130457 4765 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130467 4765 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130477 4765 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130487 4765 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130497 4765 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130507 4765 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130519 4765 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130530 4765 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130541 4765 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130552 4765 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130562 4765 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130572 4765 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130583 4765 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130593 4765 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130604 4765 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130615 4765 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130625 4765 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.130635 4765 feature_gate.go:330] unrecognized feature gate: 
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.131046 4765 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.136277 4765 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.136456 4765 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.137499 4765 server.go:997] "Starting client certificate rotation"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.137564 4765 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.137805 4765 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-26 02:17:20.368014557 +0000 UTC
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.137919 4765 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.144343 4765 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.147623 4765 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 21 13:02:19 crc kubenswrapper[4765]: E0121 13:02:19.147899 4765 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.144:6443: connect: connection refused" logger="UnhandledError"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.158951 4765 log.go:25] "Validated CRI v1 runtime API"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.180972 4765 log.go:25] "Validated CRI v1 image API"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.183160 4765 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
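Note the timestamps in the rotation entries above: the rotation deadline (2025-11-26) is already in the past at boot time (2026-01-21), so the certificate manager immediately tries to rotate, and the CSR POST fails only because nothing is answering on api-int yet. The same certificate can be checked by hand; a stdlib sketch (path taken from the log line above, run on the node itself):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
        if err != nil {
            panic(err)
        }
        // The file holds both cert and key; inspect the first CERTIFICATE block.
        for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
            if block.Type != "CERTIFICATE" {
                continue
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                panic(err)
            }
            fmt.Println("NotAfter:", cert.NotAfter)
            fmt.Println("expired :", time.Now().After(cert.NotAfter))
            break
        }
    }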
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.185769 4765 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-21-12-56-41-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.185802 4765 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.196353 4765 manager.go:217] Machine: {Timestamp:2026-01-21 13:02:19.195303981 +0000 UTC m=+0.213029823 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:66943250-b7ae-4c71-9b94-062a3ddaf203 BootID:6701690f-553a-4c5a-9946-a680675a0350 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:35:30:6c Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:35:30:6c Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d4:19:3d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:2d:70:19 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:7c:b6:4a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:81:24:36 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0e:c7:51:f4:e4:ac Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:e6:be:f3:e5:62:d5 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.196584 4765 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.196711 4765 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.197029 4765 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
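The swap_util payload above contains only the header row of /proc/swaps, i.e. there is no active swap device, consistent with SwapCapacity:0 in the Machine record and NodeSwap:false in the feature gates. A stdlib sketch of the same file read (assumes the header line is present, as it is on any Linux kernel):

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    // Counts active swap devices: /proc/swaps with only its header
    // line means swap is configured off.
    func main() {
        f, err := os.Open("/proc/swaps")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        lines := 0
        for sc.Scan() {
            lines++
        }
        fmt.Println("active swap devices:", lines-1) // first line is the header
    }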
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.197251 4765 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
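The connection refused from the CSR attempt above recurs for every API call below (reflector lists, event writes, the node lease): the kubelet comes up before the kube-apiserver static pod it is about to launch from /etc/kubernetes/manifests, so api-int.crc.testing resolves but nothing is listening on 6443 yet. The endpoint can be probed directly with a few lines of Go (hostname and port taken from the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // "connection refused" means the host is reachable but nothing
    // listens on 6443 yet, the expected state before the kube-apiserver
    // static pod starts.
    func main() {
        conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 3*time.Second)
        if err != nil {
            fmt.Println("apiserver not ready:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver endpoint is accepting connections")
    }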
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.197291 4765 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.197540 4765 topology_manager.go:138] "Creating topology manager with none policy"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.197555 4765 container_manager_linux.go:303] "Creating device plugin manager"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.197787 4765 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.197820 4765 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.198002 4765 state_mem.go:36] "Initialized new in-memory state store"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.198198 4765 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.202382 4765 kubelet.go:418] "Attempting to sync node with API server"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.202428 4765 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.202463 4765 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.202481 4765 kubelet.go:324] "Adding apiserver pod source"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.202503 4765 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.204958 4765 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.205530 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.144:6443: connect: connection refused
Jan 21 13:02:19 crc kubenswrapper[4765]: E0121 13:02:19.205634 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.144:6443: connect: connection refused" logger="UnhandledError"
Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.205894 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.144:6443: connect: connection refused
Jan 21 13:02:19 crc kubenswrapper[4765]: E0121 13:02:19.205959 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.144:6443: connect: connection refused" logger="UnhandledError"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.206098 4765 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.207023 4765 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.207766 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.207860 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.207971 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.208033 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.208096 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.208164 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.208255 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.208348 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.208416 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.208483 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.208547 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.208605 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.208981 4765 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.210326 4765 server.go:1280] "Started kubelet"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.210415 4765 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.144:6443: connect: connection refused
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.211264 4765 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.211412 4765 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.211882 4765 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.212492 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.212543 4765 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 21 13:02:19 crc systemd[1]: Started Kubernetes Kubelet.
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.221054 4765 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.221091 4765 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.221307 4765 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 21 13:02:19 crc kubenswrapper[4765]: E0121 13:02:19.222502 4765 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.144:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cc09e2b1a7814 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 13:02:19.210283028 +0000 UTC m=+0.228008850,LastTimestamp:2026-01-21 13:02:19.210283028 +0000 UTC m=+0.228008850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.229788 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:36:42.703412637 +0000 UTC
Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.231844 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.144:6443: connect: connection refused
Jan 21 13:02:19 crc kubenswrapper[4765]: E0121 13:02:19.232054 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.144:6443: connect: connection refused" logger="UnhandledError"
Jan 21 13:02:19 crc kubenswrapper[4765]: E0121 13:02:19.232932 4765 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 13:02:19 crc kubenswrapper[4765]: E0121 13:02:19.233924 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" interval="200ms"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.236451 4765 factory.go:55] Registering systemd factory
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.236551 4765 factory.go:221] Registration of the systemd container factory successfully
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.236926 4765 factory.go:153] Registering CRI-O factory
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.237000 4765 factory.go:221] Registration of the crio container factory successfully
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.237168 4765 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.237290 4765 factory.go:103] Registering Raw factory
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.237390 4765 manager.go:1196] Started watching for new ooms in manager
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.237679 4765 server.go:460] "Adding debug handlers to kubelet server"
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.239543 4765 manager.go:319] Starting recovery of all containers
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.246956 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.247128 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.247267 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.247355 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.247553 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.247638 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.247715 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.247794 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.247876 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.247988 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.248094 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.248174 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.248288 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.248370 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.248449 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.248546 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.248628 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.248713 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.248801 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.248885 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.248977 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.249097 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.249176 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.249275 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.249359 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.249445 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.249548 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.249640 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.249722 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.249800 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.249910 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.250011 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.250105 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.250183 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.250287 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.250380 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.250473 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.250556 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.250635 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.250740 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.250824 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.250910 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.250988 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.251073 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.251156 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.251271 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.251353 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.251493 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.251631 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.251733 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.251822 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.251904 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.252073 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.252166 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.252292 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.252419 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.252534 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.252647 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.252738 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.252830 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.252920 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.253004 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.253086 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.253172 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.253271 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.253352 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.253457 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.253535 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.253624 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.255453 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.255580 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.255694 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.255798 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.255912 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.256096 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.256205 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.256372 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.256492 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.256594 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.256685 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.256821 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.256908 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.257016 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.257097 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.257182 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.257337 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.257456 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.257584 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.257734 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.257830 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.257936 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.258022 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.258128 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.258303 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.258435 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.258558 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.258684 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.258764 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.258873 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.258954 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.259043 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.259145 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.259260 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.259367 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.259573 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.259684 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.259789 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.259871 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.260002 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.260090 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.260170 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.260284 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.260370 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.260453 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.260582 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.260661 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.260746 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.260824 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.260901 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.260976 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.261090 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.261240 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.261349 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.261427 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.261583 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.261668 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.261781 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d"
volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.261869 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.261955 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.262067 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.262179 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.262318 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.262637 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.262726 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.262805 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.262937 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.263044 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.263131 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.263257 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.263339 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.263416 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.263495 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.263576 4765 manager.go:324] Recovery completed Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.263584 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264177 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264234 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264252 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264267 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264282 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264327 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264344 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264359 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264394 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264409 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264423 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264437 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264481 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264497 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264513 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264548 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264564 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264585 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264600 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264683 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.264700 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265603 4765 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265655 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265672 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265686 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265718 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265734 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265747 4765 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265760 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265772 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265810 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265823 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265838 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265852 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265888 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265901 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265916 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265930 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265968 4765 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265983 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.265997 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266010 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266045 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266061 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266074 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266087 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266122 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266135 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266150 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266295 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266316 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266371 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266387 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266402 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266415 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266459 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266475 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266491 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266504 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266537 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266552 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266651 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266666 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266702 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266722 4765 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266734 4765 reconstruct.go:97] "Volume reconstruction finished" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.266743 4765 reconciler.go:26] "Reconciler: start to sync state" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.272621 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.274362 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.274425 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.274440 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.276954 4765 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.276972 4765 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.276999 4765 state_mem.go:36] "Initialized new in-memory state store" Jan 21 13:02:19 crc kubenswrapper[4765]: E0121 13:02:19.333645 4765 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 13:02:19 crc kubenswrapper[4765]: E0121 13:02:19.434533 4765 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 13:02:19 crc kubenswrapper[4765]: E0121 13:02:19.437434 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" interval="400ms" Jan 21 13:02:19 crc kubenswrapper[4765]: E0121 13:02:19.535416 4765 kubelet_node_status.go:503] "Error getting the 
current node from lister" err="node \"crc\" not found" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.577242 4765 policy_none.go:49] "None policy: Start" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.585241 4765 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.585284 4765 state_mem.go:35] "Initializing new in-memory state store" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.607731 4765 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.612084 4765 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.612352 4765 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.612420 4765 kubelet.go:2335] "Starting kubelet main sync loop" Jan 21 13:02:19 crc kubenswrapper[4765]: E0121 13:02:19.612493 4765 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 13:02:19 crc kubenswrapper[4765]: W0121 13:02:19.613324 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.144:6443: connect: connection refused Jan 21 13:02:19 crc kubenswrapper[4765]: E0121 13:02:19.613403 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.144:6443: connect: connection refused" logger="UnhandledError" Jan 21 13:02:19 crc kubenswrapper[4765]: E0121 13:02:19.636003 4765 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.660067 4765 manager.go:334] "Starting Device Plugin manager" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.660155 4765 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.660172 4765 server.go:79] "Starting device plugin registration server" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.660599 4765 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.660615 4765 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.660891 4765 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.660972 4765 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.660980 4765 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 13:02:19 crc kubenswrapper[4765]: E0121 13:02:19.669739 4765 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.713678 4765 kubelet.go:2421] 
"SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.713888 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.715282 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.715338 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.715352 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.715563 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.715841 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.716091 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.717432 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.717467 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.717479 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.717620 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.718266 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.718293 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.718755 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.718780 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.718790 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.718935 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.718978 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.718993 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.719001 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.719007 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.718942 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.719038 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.719041 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.719102 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.719681 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.719716 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.719733 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.719896 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.719935 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.719954 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.719962 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:19 crc 
kubenswrapper[4765]: I0121 13:02:19.720136 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.720180 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.720644 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.720667 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.720680 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.720854 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.720880 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.720891 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.720894 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.720927 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.721599 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.721643 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.721657 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.761166 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.762693 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.762740 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.762754 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.762786 4765 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 13:02:19 crc kubenswrapper[4765]: E0121 13:02:19.763312 4765 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.144:6443: connect: connection refused" node="crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.772796 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.772842 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.772890 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.772917 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.772955 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.772994 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.773025 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.773048 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.773070 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.773093 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.773124 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.773155 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.773177 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.773363 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.773390 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: E0121 13:02:19.839243 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" interval="800ms" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.874559 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.874639 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.874662 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 13:02:19 crc 
kubenswrapper[4765]: I0121 13:02:19.874682 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.874698 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.874732 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.874757 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.874774 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.874775 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.874828 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.874838 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.874842 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.874792 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.874893 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.874960 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.874989 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.875003 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.875022 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.874985 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.875009 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.875061 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.875101 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.875132 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.875138 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.875222 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.875240 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.875271 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.875357 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.875400 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.875505 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.963563 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.965028 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.965080 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.965094 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:19 crc kubenswrapper[4765]: I0121 13:02:19.965125 4765 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 13:02:19 crc kubenswrapper[4765]: E0121 13:02:19.965709 4765 kubelet_node_status.go:99] "Unable to register node with API 
server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.144:6443: connect: connection refused" node="crc" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.054448 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.073709 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.082413 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 13:02:20 crc kubenswrapper[4765]: W0121 13:02:20.090683 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-833305bf6dab82b81fff92f04c5297f73481eae2bf5dc6b295ce5968ba44765f WatchSource:0}: Error finding container 833305bf6dab82b81fff92f04c5297f73481eae2bf5dc6b295ce5968ba44765f: Status 404 returned error can't find the container with id 833305bf6dab82b81fff92f04c5297f73481eae2bf5dc6b295ce5968ba44765f Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.104253 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 13:02:20 crc kubenswrapper[4765]: W0121 13:02:20.106964 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-e5e615d8e6f1f9c90026a5c2d6a451b69b98f0d6d64cea268c7872cf5683b8ff WatchSource:0}: Error finding container e5e615d8e6f1f9c90026a5c2d6a451b69b98f0d6d64cea268c7872cf5683b8ff: Status 404 returned error can't find the container with id e5e615d8e6f1f9c90026a5c2d6a451b69b98f0d6d64cea268c7872cf5683b8ff Jan 21 13:02:20 crc kubenswrapper[4765]: W0121 13:02:20.109760 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.144:6443: connect: connection refused Jan 21 13:02:20 crc kubenswrapper[4765]: E0121 13:02:20.109833 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.144:6443: connect: connection refused" logger="UnhandledError" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.113088 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:02:20 crc kubenswrapper[4765]: W0121 13:02:20.137452 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-2671bc70fbb17548fc61ebc30f86ec1a26967964ab667dce9174cbe8abca97b3 WatchSource:0}: Error finding container 2671bc70fbb17548fc61ebc30f86ec1a26967964ab667dce9174cbe8abca97b3: Status 404 returned error can't find the container with id 2671bc70fbb17548fc61ebc30f86ec1a26967964ab667dce9174cbe8abca97b3 Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.211860 4765 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.144:6443: connect: connection refused Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.232286 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 13:41:20.897005475 +0000 UTC Jan 21 13:02:20 crc kubenswrapper[4765]: W0121 13:02:20.272258 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.144:6443: connect: connection refused Jan 21 13:02:20 crc kubenswrapper[4765]: E0121 13:02:20.272378 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.144:6443: connect: connection refused" logger="UnhandledError" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.365816 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.367923 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.367976 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.367988 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.368020 4765 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 13:02:20 crc kubenswrapper[4765]: E0121 13:02:20.368650 4765 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.144:6443: connect: connection refused" node="crc" Jan 21 13:02:20 crc kubenswrapper[4765]: W0121 13:02:20.552622 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.144:6443: connect: connection refused Jan 21 13:02:20 crc kubenswrapper[4765]: E0121 13:02:20.552741 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.144:6443: connect: connection refused" logger="UnhandledError" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.620566 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91"} Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.620701 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8430d6de79ab33d4d5e255c20079ed38efe7fdca02ef67984d5e3b2ebf80b18a"} Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.622494 4765 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411" exitCode=0 Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.622577 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411"} Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.622601 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"833305bf6dab82b81fff92f04c5297f73481eae2bf5dc6b295ce5968ba44765f"} Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.622706 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.623808 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.623837 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.623846 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.625172 4765 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c" exitCode=0 Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.625246 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c"} Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.625267 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3d0650a7518f25d7d89cf04cde5462e9ec4cf1220a9fc413cb773e791b76faa3"} Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.625689 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.626529 4765 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.626550 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.626557 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.627189 4765 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f090b21d8b8f1d0757341b4442f9594170ed8c7792c06df6b06f7f9d93669bcc" exitCode=0 Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.627243 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f090b21d8b8f1d0757341b4442f9594170ed8c7792c06df6b06f7f9d93669bcc"} Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.627259 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2671bc70fbb17548fc61ebc30f86ec1a26967964ab667dce9174cbe8abca97b3"} Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.627321 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.628016 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.628896 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.628943 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.628968 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.631890 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.632076 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.632331 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.632603 4765 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="55035304fca2b0d4e18e762b3ae515727a123c3ad9fd7eb44e2672a8881bad8c" exitCode=0 Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.632645 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"55035304fca2b0d4e18e762b3ae515727a123c3ad9fd7eb44e2672a8881bad8c"} Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.632808 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e5e615d8e6f1f9c90026a5c2d6a451b69b98f0d6d64cea268c7872cf5683b8ff"} Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.632937 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.633923 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.633956 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:20 crc kubenswrapper[4765]: I0121 13:02:20.633964 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:20 crc kubenswrapper[4765]: E0121 13:02:20.640793 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" interval="1.6s" Jan 21 13:02:20 crc kubenswrapper[4765]: W0121 13:02:20.696611 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.144:6443: connect: connection refused Jan 21 13:02:20 crc kubenswrapper[4765]: E0121 13:02:20.696713 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.144:6443: connect: connection refused" logger="UnhandledError" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.169320 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.173778 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.173831 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.173844 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.173876 4765 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 13:02:21 crc kubenswrapper[4765]: E0121 13:02:21.174580 4765 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.144:6443: connect: connection refused" node="crc" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.177636 4765 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 13:02:21 crc kubenswrapper[4765]: E0121 13:02:21.179286 4765 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.144:6443: connect: connection refused" 
logger="UnhandledError" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.212238 4765 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.144:6443: connect: connection refused Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.233318 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 07:13:34.305516518 +0000 UTC Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.654597 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e73f5c3b6b993cba5ad746efdbe1e24cb5bd1ac653a80d6c47eaaff07d917eeb"} Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.654664 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"68a549a3dc26287c8cab6ffaaf643a3b7a9aee3ba27f10f0741c11412d152b69"} Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.659024 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9a5111055c302cebecfb649ba86b3c51d36213cdbebe7c90c5aadea87dc93399"} Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.659154 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.662665 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.662705 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.662717 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.686657 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34"} Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.686782 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53"} Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.686798 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b"} Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.691137 4765 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="55606e3a6437136b0fde36e2cb3f4c247406033825355a5d618f89a6391e7dd0" exitCode=0 Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.691258 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"55606e3a6437136b0fde36e2cb3f4c247406033825355a5d618f89a6391e7dd0"} Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.691454 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.692747 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.692783 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.692797 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.696928 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7d4bb3739eb8cd7744b7117f4db0817ff3feb326f9016dedb4bfb5dc0614ed0f"} Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.697183 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.698580 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.698617 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.698632 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.703893 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1"} Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.703962 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc"} Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.703980 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0"} Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.704120 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.705102 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.705134 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:21 crc kubenswrapper[4765]: I0121 13:02:21.705148 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.234287 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 13:26:26.100223562 +0000 UTC Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.710176 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5"} Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.710268 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.710276 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511"} Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.711191 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.711237 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.711248 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.711953 4765 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1d07e4f80f2e1879614e92aff0439f6973438a90958adb87e120d5c9250405a2" exitCode=0 Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.711983 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1d07e4f80f2e1879614e92aff0439f6973438a90958adb87e120d5c9250405a2"} Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.712058 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.712101 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.712832 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.712861 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.712871 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.712832 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.712940 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.712951 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.775516 4765 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.776648 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.776687 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.776699 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.776723 4765 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 13:02:22 crc kubenswrapper[4765]: I0121 13:02:22.945787 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:02:23 crc kubenswrapper[4765]: I0121 13:02:23.234558 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 02:41:16.404865124 +0000 UTC Jan 21 13:02:23 crc kubenswrapper[4765]: I0121 13:02:23.720081 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a0296ec6f9d18e58e2c1f19d9f7f61cc00ef57658281599feec7b014ecf43162"} Jan 21 13:02:23 crc kubenswrapper[4765]: I0121 13:02:23.720151 4765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 13:02:23 crc kubenswrapper[4765]: I0121 13:02:23.720245 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:23 crc kubenswrapper[4765]: I0121 13:02:23.720151 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cc8349a56cdd9c532266c34e905bcd054cb930f4671e263a42c251885a57c135"} Jan 21 13:02:23 crc kubenswrapper[4765]: I0121 13:02:23.720658 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"087c8f655ee1401ba2ba2aef5114da1dc5517a2e680bb2d858466ec1697ed570"} Jan 21 13:02:23 crc kubenswrapper[4765]: I0121 13:02:23.721493 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:23 crc kubenswrapper[4765]: I0121 13:02:23.721542 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:23 crc kubenswrapper[4765]: I0121 13:02:23.721558 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.235493 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:04:04.305198602 +0000 UTC Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.256878 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.257169 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 
13:02:24.258880 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.259058 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.259164 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.337859 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.338116 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.339066 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.339758 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.339804 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.339819 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.728269 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.728359 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.728245 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"19a1ed0c0a13ba06d039eb00c93955eee168f9b8726b9d91d9da25089b6af568"} Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.728436 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6d19188e4bc56de598038fb0893a406da76c02448ac52d5bdf7905096774603e"} Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.729238 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.729247 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.729301 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.729322 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.729348 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.729363 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.784431 4765 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:02:24 crc kubenswrapper[4765]: I0121 13:02:24.826969 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 21 13:02:25 crc kubenswrapper[4765]: I0121 13:02:25.237477 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:38:41.125520462 +0000 UTC Jan 21 13:02:25 crc kubenswrapper[4765]: I0121 13:02:25.325575 4765 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 13:02:25 crc kubenswrapper[4765]: I0121 13:02:25.731047 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:25 crc kubenswrapper[4765]: I0121 13:02:25.731048 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:25 crc kubenswrapper[4765]: I0121 13:02:25.732170 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:25 crc kubenswrapper[4765]: I0121 13:02:25.732201 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:25 crc kubenswrapper[4765]: I0121 13:02:25.732230 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:25 crc kubenswrapper[4765]: I0121 13:02:25.732248 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:25 crc kubenswrapper[4765]: I0121 13:02:25.732306 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:25 crc kubenswrapper[4765]: I0121 13:02:25.732318 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:26 crc kubenswrapper[4765]: I0121 13:02:26.237916 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 22:58:41.696592875 +0000 UTC Jan 21 13:02:26 crc kubenswrapper[4765]: I0121 13:02:26.733411 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:26 crc kubenswrapper[4765]: I0121 13:02:26.733411 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:26 crc kubenswrapper[4765]: I0121 13:02:26.734340 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:26 crc kubenswrapper[4765]: I0121 13:02:26.734354 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:26 crc kubenswrapper[4765]: I0121 13:02:26.734403 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:26 crc kubenswrapper[4765]: I0121 13:02:26.734377 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:26 crc kubenswrapper[4765]: I0121 13:02:26.734429 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:26 crc kubenswrapper[4765]: I0121 13:02:26.734493 4765 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:26 crc kubenswrapper[4765]: I0121 13:02:26.868248 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:02:26 crc kubenswrapper[4765]: I0121 13:02:26.868521 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:26 crc kubenswrapper[4765]: I0121 13:02:26.869932 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:26 crc kubenswrapper[4765]: I0121 13:02:26.869978 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:26 crc kubenswrapper[4765]: I0121 13:02:26.869988 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:27 crc kubenswrapper[4765]: I0121 13:02:27.239086 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 03:12:05.517980229 +0000 UTC Jan 21 13:02:28 crc kubenswrapper[4765]: I0121 13:02:28.240300 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 08:52:27.528085387 +0000 UTC Jan 21 13:02:29 crc kubenswrapper[4765]: I0121 13:02:29.241089 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:19:00.492185366 +0000 UTC Jan 21 13:02:29 crc kubenswrapper[4765]: I0121 13:02:29.373326 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 21 13:02:29 crc kubenswrapper[4765]: I0121 13:02:29.373606 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:29 crc kubenswrapper[4765]: I0121 13:02:29.374824 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:29 crc kubenswrapper[4765]: I0121 13:02:29.374862 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:29 crc kubenswrapper[4765]: I0121 13:02:29.374876 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:29 crc kubenswrapper[4765]: E0121 13:02:29.670106 4765 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 13:02:30 crc kubenswrapper[4765]: I0121 13:02:30.241908 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:16:46.231789336 +0000 UTC Jan 21 13:02:30 crc kubenswrapper[4765]: I0121 13:02:30.320349 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:02:30 crc kubenswrapper[4765]: I0121 13:02:30.320563 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:30 crc kubenswrapper[4765]: I0121 13:02:30.321631 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 21 13:02:30 crc kubenswrapper[4765]: I0121 13:02:30.321695 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:30 crc kubenswrapper[4765]: I0121 13:02:30.321707 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:30 crc kubenswrapper[4765]: I0121 13:02:30.327414 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:02:30 crc kubenswrapper[4765]: I0121 13:02:30.743579 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:30 crc kubenswrapper[4765]: I0121 13:02:30.744832 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:30 crc kubenswrapper[4765]: I0121 13:02:30.744874 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:30 crc kubenswrapper[4765]: I0121 13:02:30.744888 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:30 crc kubenswrapper[4765]: I0121 13:02:30.748433 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:02:31 crc kubenswrapper[4765]: I0121 13:02:31.242443 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:30:18.7310331 +0000 UTC Jan 21 13:02:31 crc kubenswrapper[4765]: I0121 13:02:31.585973 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:02:31 crc kubenswrapper[4765]: I0121 13:02:31.779338 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:31 crc kubenswrapper[4765]: I0121 13:02:31.780411 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:31 crc kubenswrapper[4765]: I0121 13:02:31.780448 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:31 crc kubenswrapper[4765]: I0121 13:02:31.780458 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:32 crc kubenswrapper[4765]: W0121 13:02:32.105297 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 13:02:32 crc kubenswrapper[4765]: I0121 13:02:32.105617 4765 trace.go:236] Trace[1155712381]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 13:02:22.103) (total time: 10002ms): Jan 21 13:02:32 crc kubenswrapper[4765]: Trace[1155712381]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:02:32.105) Jan 21 13:02:32 crc kubenswrapper[4765]: Trace[1155712381]: [10.002066099s] [10.002066099s] END Jan 21 13:02:32 crc kubenswrapper[4765]: E0121 13:02:32.105754 4765 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 13:02:32 crc kubenswrapper[4765]: E0121 13:02:32.242057 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 21 13:02:32 crc kubenswrapper[4765]: I0121 13:02:32.243495 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:49:42.898308367 +0000 UTC Jan 21 13:02:32 crc kubenswrapper[4765]: I0121 13:02:32.247846 4765 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 21 13:02:32 crc kubenswrapper[4765]: W0121 13:02:32.476137 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 13:02:32 crc kubenswrapper[4765]: I0121 13:02:32.476317 4765 trace.go:236] Trace[1179738818]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 13:02:22.474) (total time: 10001ms): Jan 21 13:02:32 crc kubenswrapper[4765]: Trace[1179738818]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:02:32.476) Jan 21 13:02:32 crc kubenswrapper[4765]: Trace[1179738818]: [10.001369538s] [10.001369538s] END Jan 21 13:02:32 crc kubenswrapper[4765]: E0121 13:02:32.476367 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 13:02:32 crc kubenswrapper[4765]: W0121 13:02:32.511128 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 13:02:32 crc kubenswrapper[4765]: I0121 13:02:32.511268 4765 trace.go:236] Trace[843665576]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 13:02:22.509) (total time: 10001ms): Jan 21 13:02:32 crc kubenswrapper[4765]: Trace[843665576]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:02:32.511) Jan 21 13:02:32 crc kubenswrapper[4765]: Trace[843665576]: [10.001600921s] [10.001600921s] END Jan 21 13:02:32 crc kubenswrapper[4765]: E0121 13:02:32.511297 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 13:02:32 crc kubenswrapper[4765]: E0121 13:02:32.594666 4765 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.188cc09e2b1a7814 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 13:02:19.210283028 +0000 UTC m=+0.228008850,LastTimestamp:2026-01-21 13:02:19.210283028 +0000 UTC m=+0.228008850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 13:02:32 crc kubenswrapper[4765]: W0121 13:02:32.620327 4765 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 13:02:32 crc kubenswrapper[4765]: I0121 13:02:32.620439 4765 trace.go:236] Trace[1432702840]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 13:02:22.618) (total time: 10001ms): Jan 21 13:02:32 crc kubenswrapper[4765]: Trace[1432702840]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:02:32.620) Jan 21 13:02:32 crc kubenswrapper[4765]: Trace[1432702840]: [10.0014199s] [10.0014199s] END Jan 21 13:02:32 crc kubenswrapper[4765]: E0121 13:02:32.620464 4765 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 13:02:32 crc kubenswrapper[4765]: E0121 13:02:32.833617 4765 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 21 13:02:32 crc kubenswrapper[4765]: I0121 13:02:32.838133 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:32 crc kubenswrapper[4765]: I0121 13:02:32.839043 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:32 crc kubenswrapper[4765]: I0121 13:02:32.839086 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:32 crc kubenswrapper[4765]: I0121 13:02:32.839099 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:33 crc kubenswrapper[4765]: I0121 13:02:32.946670 4765 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 
13:02:33 crc kubenswrapper[4765]: I0121 13:02:32.946747 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 13:02:33 crc kubenswrapper[4765]: I0121 13:02:33.244389 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:13:46.589354287 +0000 UTC Jan 21 13:02:34 crc kubenswrapper[4765]: I0121 13:02:34.244917 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 06:55:48.763262799 +0000 UTC Jan 21 13:02:34 crc kubenswrapper[4765]: I0121 13:02:34.586832 4765 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 13:02:34 crc kubenswrapper[4765]: I0121 13:02:34.586942 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 13:02:35 crc kubenswrapper[4765]: I0121 13:02:35.216627 4765 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 13:02:35 crc kubenswrapper[4765]: I0121 13:02:35.216708 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 13:02:35 crc kubenswrapper[4765]: I0121 13:02:35.245167 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 00:20:19.740570616 +0000 UTC Jan 21 13:02:36 crc kubenswrapper[4765]: I0121 13:02:36.033947 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 13:02:36 crc kubenswrapper[4765]: I0121 13:02:36.064472 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:36 crc kubenswrapper[4765]: I0121 13:02:36.064544 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:36 crc kubenswrapper[4765]: I0121 13:02:36.064558 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:36 crc kubenswrapper[4765]: I0121 13:02:36.064595 4765 
Jan 21 13:02:36 crc kubenswrapper[4765]: I0121 13:02:36.064595 4765 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 21 13:02:36 crc kubenswrapper[4765]: I0121 13:02:36.245559 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 11:21:16.68256901 +0000 UTC
Jan 21 13:02:37 crc kubenswrapper[4765]: I0121 13:02:37.009663 4765 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 21 13:02:37 crc kubenswrapper[4765]: I0121 13:02:37.103796 4765 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 21 13:02:37 crc kubenswrapper[4765]: I0121 13:02:37.279343 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 07:46:51.728422455 +0000 UTC
Jan 21 13:02:37 crc kubenswrapper[4765]: I0121 13:02:37.952313 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 13:02:37 crc kubenswrapper[4765]: I0121 13:02:37.952968 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 13:02:37 crc kubenswrapper[4765]: I0121 13:02:37.954403 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:02:37 crc kubenswrapper[4765]: I0121 13:02:37.954449 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:02:37 crc kubenswrapper[4765]: I0121 13:02:37.954465 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:02:37 crc kubenswrapper[4765]: I0121 13:02:37.956873 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 13:02:38 crc kubenswrapper[4765]: I0121 13:02:38.280521 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:11:35.44903717 +0000 UTC
Jan 21 13:02:38 crc kubenswrapper[4765]: I0121 13:02:38.855841 4765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 13:02:38 crc kubenswrapper[4765]: I0121 13:02:38.855912 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 13:02:38 crc kubenswrapper[4765]: I0121 13:02:38.857133 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:02:38 crc kubenswrapper[4765]: I0121 13:02:38.857195 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:02:38 crc kubenswrapper[4765]: I0121 13:02:38.857236 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:02:39 crc kubenswrapper[4765]: I0121 13:02:39.281570 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:36:57.605604799 +0000 UTC
Jan 21 13:02:39 crc kubenswrapper[4765]: I0121 13:02:39.415807 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
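
"Caches populated for *v1.RuntimeClass / *v1.CSIDriver" marks the moment the same initial LISTs that were timing out at 13:02:32 finally succeed and each reflector switches from listing to watching; the startup probes turning "started"/"ready" right after confirm the apiserver is now serving. A reduced client-go sketch of that informer machinery; the kubeconfig path is an assumed stand-in, not read from the log:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	factory := informers.NewSharedInformerFactory(cs, 10*time.Minute)
	informer := factory.Node().V1().RuntimeClasses().Informer()

	stop := make(chan struct{})
	factory.Start(stop)
	// WaitForCacheSync blocks until the initial LIST succeeds -- the point at
	// which the kubelet logs "Caches populated for *v1.RuntimeClass".
	factory.WaitForCacheSync(stop)
	fmt.Println("runtimeclass cache synced:", informer.HasSynced())
	close(stop)
}
```
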
Jan 21 13:02:39 crc kubenswrapper[4765]: I0121 13:02:39.416047 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 13:02:39 crc kubenswrapper[4765]: I0121 13:02:39.417558 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:02:39 crc kubenswrapper[4765]: I0121 13:02:39.417598 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:02:39 crc kubenswrapper[4765]: I0121 13:02:39.417616 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:02:39 crc kubenswrapper[4765]: I0121 13:02:39.428915 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 21 13:02:39 crc kubenswrapper[4765]: E0121 13:02:39.670286 4765 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 13:02:39 crc kubenswrapper[4765]: I0121 13:02:39.859002 4765 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 13:02:39 crc kubenswrapper[4765]: I0121 13:02:39.860067 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:02:39 crc kubenswrapper[4765]: I0121 13:02:39.860143 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:02:39 crc kubenswrapper[4765]: I0121 13:02:39.860159 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:02:40 crc kubenswrapper[4765]: I0121 13:02:40.220824 4765 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 21 13:02:40 crc kubenswrapper[4765]: I0121 13:02:40.410769 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 21:45:22.79331159 +0000 UTC
Jan 21 13:02:40 crc kubenswrapper[4765]: I0121 13:02:40.418256 4765 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 21 13:02:40 crc kubenswrapper[4765]: I0121 13:02:40.428075 4765 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58802->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 21 13:02:40 crc kubenswrapper[4765]: I0121 13:02:40.428170 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58802->192.168.126.11:17697: read: connection reset by peer"
Jan 21 13:02:40 crc kubenswrapper[4765]: I0121 13:02:40.428410 4765 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58792->192.168.126.11:17697: read: connection reset by peer" start-of-body=
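
The check-endpoints probes on port 17697 fail in two distinct ways within the same millisecond window: "read: connection reset by peer" (the container accepted the connection, then dropped it because it was exiting) followed, in the records that continue below, by "connect: connection refused" (nothing listening any more), which matches the ContainerDied event logged just after. A hypothetical standalone probe that tells the two apart; this is not the kubelet's own prober, and the endpoint is taken from the log:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"syscall"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second}
	_, err := client.Get("https://192.168.126.11:17697/healthz") // port from the log
	switch {
	case err == nil:
		fmt.Println("listening")
	case errors.Is(err, syscall.ECONNREFUSED):
		fmt.Println("connection refused: no listener, container is gone")
	case errors.Is(err, syscall.ECONNRESET):
		fmt.Println("connection reset: listener died mid-request")
	default:
		fmt.Println("other failure:", err) // e.g. TLS or timeout errors
	}
}
```
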
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58792->192.168.126.11:17697: read: connection reset by peer" Jan 21 13:02:40 crc kubenswrapper[4765]: I0121 13:02:40.428703 4765 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 13:02:40 crc kubenswrapper[4765]: I0121 13:02:40.428826 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 13:02:40 crc kubenswrapper[4765]: I0121 13:02:40.849536 4765 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 13:02:40 crc kubenswrapper[4765]: I0121 13:02:40.863689 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 13:02:40 crc kubenswrapper[4765]: I0121 13:02:40.865265 4765 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5" exitCode=255 Jan 21 13:02:40 crc kubenswrapper[4765]: I0121 13:02:40.865314 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5"} Jan 21 13:02:40 crc kubenswrapper[4765]: I0121 13:02:40.884388 4765 scope.go:117] "RemoveContainer" containerID="691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.134104 4765 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.134433 4765 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.135887 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.135946 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.135962 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.135987 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.136002 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:41Z","lastTransitionTime":"2026-01-21T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.164556 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.170549 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.170604 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.170619 4765 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.170637 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.170657 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:41Z","lastTransitionTime":"2026-01-21T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.184325 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.190933 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.190973 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.190987 4765 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.191009 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.191024 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:41Z","lastTransitionTime":"2026-01-21T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.279548 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.286248 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.286299 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.286314 4765 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.286336 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.286352 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:41Z","lastTransitionTime":"2026-01-21T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.301075 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.306038 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.306091 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.306104 4765 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.306123 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.306134 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:41Z","lastTransitionTime":"2026-01-21T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.333637 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.333802 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.335557 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.335631 4765 
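Both status-patch attempts above fail the same way: before persisting the kubelet's PATCH, the API server must consult the node.network-node-identity.openshift.io validating webhook, and nothing is listening on 127.0.0.1:9743 yet, so each attempt ends in connection refused until the retry budget is spent and the kubelet gives up with "update node status exceeds retry count". A minimal Go sketch of that retry shape, illustrative only and not kubelet source: the retry constant mirrors kubelet's nodeStatusUpdateRetry, and the kubelet-to-apiserver-to-webhook hop is conflated into one TCP dial so the sketch stays self-contained.

```go
// Sketch of the retry loop behind "Error updating node status, will retry".
package main

import (
	"fmt"
	"net"
	"time"
)

const nodeStatusUpdateRetry = 5 // mirrors kubelet's constant of the same name

func tryPatchNodeStatus() error {
	// Stand-in for the PATCH the API server refuses to admit while the
	// node.network-node-identity.openshift.io webhook backend is down.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 10*time.Second)
	if err != nil {
		return fmt.Errorf("failed to call webhook: %w", err)
	}
	conn.Close()
	return nil
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryPatchNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		fmt.Println("node status patched")
		return
	}
	fmt.Println("Unable to update node status: update node status exceeds retry count")
}
```

Run while nothing serves :9743 and it prints the same connection-refused chain five times; once the identity webhook binds the port, the patch path clears.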
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.335557 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.335631 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.335648 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.335679 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.335696 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:41Z","lastTransitionTime":"2026-01-21T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.411672 4765 apiserver.go:52] "Watching apiserver"
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.411689 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 14:57:25.811658924 +0000 UTC
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.414771 4765 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.438611 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.438649 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.438660 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.438678 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.438692 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:41Z","lastTransitionTime":"2026-01-21T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.475033 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"]
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.475477 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.475600 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.475693 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.475700 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.475770 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.475913 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.476311 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.476471 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.476614 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
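Every pod sync above is skipped for the same root cause the Ready condition keeps reporting: the runtime's network is not ready because /etc/kubernetes/cni/net.d/ holds no CNI configuration yet (ovn-kubernetes has not written one). The readiness gate is essentially "does a config file exist in the conf dir". A standalone sketch of that check: the directory and message come from the log itself, while the accepted extensions follow the usual CNI loader convention, simplified from what ocicni actually does.

```go
// Sketch of the gate behind "no CNI configuration file in /etc/kubernetes/cni/net.d/".
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func cniConfigPresent(dir string) bool {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false // a missing or unreadable dir counts as "no config"
	}
	for _, e := range entries {
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json": // conventional CNI config extensions
			return true
		}
	}
	return false
}

func main() {
	if !cniConfigPresent("/etc/kubernetes/cni/net.d") {
		// Same condition the kubelet surfaces until the network plugin writes its config.
		fmt.Println("container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady")
		return
	}
	fmt.Println("NetworkReady=true")
}
```

The moment the network operator drops a conflist into that directory, the runtime flips NetworkReady to true and these "Error syncing pod, skipping" records stop.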
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.485020 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.485345 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.488743 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.488970 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.489010 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.489157 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.489297 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.489467 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.541583 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.541875 4765 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.547828 4765 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.551628 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.551676 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.551690 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.551753 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.551812 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:41Z","lastTransitionTime":"2026-01-21T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.572297 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.595568 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.602898 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.641947 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642062 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod 
\"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642096 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642144 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642168 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642189 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642252 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642277 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642325 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642350 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642376 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642513 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642560 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642585 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642608 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642632 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642655 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642682 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642718 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642739 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642791 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642817 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642839 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642863 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642888 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642881 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642948 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.642982 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643059 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643084 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643105 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643127 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643172 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643224 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643248 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643276 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643302 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643324 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643340 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643347 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643383 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643411 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643434 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643457 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643485 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643501 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643510 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643535 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643561 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643589 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643612 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643635 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643659 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643698 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643722 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643744 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643767 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643817 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643840 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643915 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643939 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643966 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643993 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644018 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644041 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644063 4765 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644090 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644116 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644140 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644164 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644186 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644232 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644257 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644280 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644308 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644332 4765 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644356 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644380 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644405 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644430 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644455 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644480 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644654 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644686 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644714 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: 
\"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644745 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644775 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644802 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644829 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644859 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644887 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644914 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644940 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644967 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644994 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645020 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645044 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645068 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645094 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645120 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645142 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645163 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645184 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645241 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645267 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645289 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645314 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645338 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645365 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645401 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645427 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645452 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645476 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645498 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 
13:02:41.645522 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645547 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645569 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645594 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645617 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645638 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645662 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645694 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645718 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645743 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645768 4765 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645797 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645821 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645847 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645870 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645894 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645916 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645941 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645969 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645994 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: 
\"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646019 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646042 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646066 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646090 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646114 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646138 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646160 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646190 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646238 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646265 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646292 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646314 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646336 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646363 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646386 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646410 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646436 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646460 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646485 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646510 4765 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646532 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646560 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646583 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646609 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646662 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646692 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646729 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646770 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646799 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646823 4765 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646848 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646944 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646972 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646999 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647026 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647075 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647102 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647127 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647154 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 
13:02:41.647183 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647253 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647279 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647306 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647330 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647381 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647408 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647433 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647462 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647486 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647511 
4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647539 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647563 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647587 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647610 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647634 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647658 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647682 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647708 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647731 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 
13:02:41.647751 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647774 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647801 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647825 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647900 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.647981 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.648015 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.648041 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.648067 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.648121 4765 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.648148 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.648176 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.648227 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.648256 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.648282 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.648308 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.648333 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.648360 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.648385 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.648475 4765 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.648836 4765 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.649523 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.649672 4765 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.643931 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644201 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644362 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644687 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644815 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.644944 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645068 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645194 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645552 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.645724 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646051 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646187 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646334 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646464 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646615 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646795 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.646922 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.649235 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.649257 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.649608 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.649829 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.650012 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.650126 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.650584 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.650621 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.651026 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.651087 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.651367 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.651396 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.651638 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.651665 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.651888 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.652341 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.652814 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.653073 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.653274 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.653496 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.653661 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.653920 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.654371 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.654627 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.654824 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.655068 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.655302 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.655605 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.656095 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.656433 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.656685 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.657025 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.657238 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.657817 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.657948 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.658362 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.658657 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.659082 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.659108 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.659529 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.659732 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.660106 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.660188 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.660707 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.661057 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.661120 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.661239 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:02:42.16119846 +0000 UTC m=+23.178924282 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.668425 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.668904 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.669278 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.669755 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.670257 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.670698 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.671077 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.672259 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.673439 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.675301 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.675495 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.675737 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.677451 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.677469 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.677478 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.677492 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.677501 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:41Z","lastTransitionTime":"2026-01-21T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.661447 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.661456 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.661758 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.661834 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.662065 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.662368 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.662419 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.662629 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.662778 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.662938 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.663112 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.663173 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.663448 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.663538 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.663634 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.664108 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.664348 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.664472 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.664721 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.665061 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.665364 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.665626 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.665919 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.666329 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.666646 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.666888 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.667320 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.678554 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.678780 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.680137 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.680243 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.680449 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.680651 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.680786 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.680876 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.681055 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.681191 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.681277 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.681369 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.681469 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.681485 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.681663 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.681673 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.681883 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.682287 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.682554 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.683323 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.683334 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.683400 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.683882 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.684091 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.684126 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.684623 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.684629 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.684933 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.685016 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.685169 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.685538 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.686737 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.686812 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.687063 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.687290 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.687319 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.688316 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.688316 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.688724 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.689066 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.689502 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.689741 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.689911 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.690484 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.690661 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.690707 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.690926 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.691172 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.691183 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.691381 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.691440 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.691705 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.691749 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.692003 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.692143 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.692153 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.692331 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.692383 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.692491 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.692595 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.692820 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.694006 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.694271 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.694703 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.695071 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.698866 4765 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.706893 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.707285 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.707368 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.707437 4765 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.707594 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:42.207571032 +0000 UTC m=+23.225296854 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.711261 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.711630 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.711665 4765 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.711752 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:42.211727045 +0000 UTC m=+23.229453057 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.711832 4765 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.711836 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.711875 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:42.211866059 +0000 UTC m=+23.229592111 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.716538 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.718643 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.718836 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.719258 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.719456 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.720150 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.725973 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.727770 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.728123 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.729915 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.731110 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.734773 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.735260 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.735581 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.735613 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.735630 4765 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:41 crc kubenswrapper[4765]: E0121 13:02:41.736794 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:42.236752547 +0000 UTC m=+23.254478369 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.737005 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.738287 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.744987 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.745135 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.745388 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.745552 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.746261 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.746368 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.746726 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.749241 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.749873 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.750734 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.750802 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.750994 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751022 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751034 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751074 4765 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751089 4765 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751101 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751113 4765 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751154 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751168 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751180 4765 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751192 4765 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751233 4765 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751249 4765 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751262 4765 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751274 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751287 4765 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751299 4765 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751343 4765 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751357 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751368 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751379 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751391 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751402 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751417 4765 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751428 4765 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751440 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751452 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751463 4765 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751476 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751487 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" 
DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751499 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751512 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751524 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751536 4765 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751547 4765 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751558 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751569 4765 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751580 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751592 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751603 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751615 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751627 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751638 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751649 4765 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751660 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751671 4765 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751683 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751696 4765 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751708 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751719 4765 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751730 4765 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751742 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751754 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751767 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751779 4765 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751791 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751804 4765 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751816 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751829 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751840 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751853 4765 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751865 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751878 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751891 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751925 4765 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751955 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751967 4765 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751979 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.751989 4765 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc 
kubenswrapper[4765]: I0121 13:02:41.752001 4765 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752013 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752024 4765 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752036 4765 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752065 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752080 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752094 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752121 4765 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752137 4765 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752149 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752162 4765 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752173 4765 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752185 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc 
kubenswrapper[4765]: I0121 13:02:41.752197 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752237 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752251 4765 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752263 4765 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752275 4765 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752288 4765 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752299 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752312 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752325 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752338 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752350 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752362 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752373 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 
13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752385 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752397 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752411 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752425 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752436 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752448 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752460 4765 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752472 4765 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752483 4765 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752495 4765 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752509 4765 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752523 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752537 4765 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 
crc kubenswrapper[4765]: I0121 13:02:41.752549 4765 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752562 4765 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752575 4765 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752588 4765 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752601 4765 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752614 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752627 4765 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752640 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752653 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752666 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752678 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752691 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752704 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752716 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752728 4765 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752740 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752753 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752764 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752774 4765 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752786 4765 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752797 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752808 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752819 4765 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752830 4765 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752841 4765 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752852 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752863 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752875 4765 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752886 4765 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752898 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752909 4765 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752921 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752934 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752945 4765 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752958 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752970 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.752983 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753012 4765 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753024 4765 
reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753056 4765 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753069 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753080 4765 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753091 4765 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753104 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753117 4765 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753129 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753142 4765 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753154 4765 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753170 4765 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753182 4765 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753195 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753224 
4765 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753236 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753248 4765 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753260 4765 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753271 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753284 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753295 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753307 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753320 4765 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753331 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753343 4765 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753355 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753368 4765 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 
13:02:41.753381 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753392 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753403 4765 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753414 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753425 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753436 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753448 4765 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753459 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753470 4765 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753481 4765 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753491 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753503 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753514 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753525 4765 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753537 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753830 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.753872 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.756492 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.759140 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.761066 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.762911 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.773443 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.786705 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.787133 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.787239 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.787254 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.787275 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.787307 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:41Z","lastTransitionTime":"2026-01-21T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.813743 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.868314 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.869369 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.869415 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.869428 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.869441 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.869502 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.957548 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.960172 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.960349 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.960420 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.960587 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.960669 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:41Z","lastTransitionTime":"2026-01-21T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.966836 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.970230 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245"} Jan 21 13:02:41 crc kubenswrapper[4765]: I0121 13:02:41.970308 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.037785 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.068824 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.073667 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.073693 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.073701 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.073716 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.073727 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:42Z","lastTransitionTime":"2026-01-21T13:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.092525 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.112952 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:42 crc kubenswrapper[4765]: W0121 13:02:42.118994 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-d4e7deb14f5210bfb9cbe21881ca9ed7e235c3db2aacd302be1db742e9d0f313 WatchSource:0}: Error finding container d4e7deb14f5210bfb9cbe21881ca9ed7e235c3db2aacd302be1db742e9d0f313: Status 404 returned error can't find the container with id d4e7deb14f5210bfb9cbe21881ca9ed7e235c3db2aacd302be1db742e9d0f313 Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.126566 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.146506 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.176309 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.176352 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.176361 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.176393 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.176405 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:42Z","lastTransitionTime":"2026-01-21T13:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.240356 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.240475 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.240509 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.240539 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.240569 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:02:42 crc kubenswrapper[4765]: E0121 13:02:42.240741 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:02:42 crc kubenswrapper[4765]: E0121 13:02:42.240796 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:02:42 crc kubenswrapper[4765]: E0121 13:02:42.240812 4765 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:42 crc kubenswrapper[4765]: E0121 13:02:42.240862 4765 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:02:42 crc kubenswrapper[4765]: E0121 13:02:42.240889 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:02:42 crc kubenswrapper[4765]: E0121 
13:02:42.240929 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:02:42 crc kubenswrapper[4765]: E0121 13:02:42.240945 4765 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:42 crc kubenswrapper[4765]: E0121 13:02:42.240884 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:43.240860749 +0000 UTC m=+24.258586571 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:42 crc kubenswrapper[4765]: E0121 13:02:42.240960 4765 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:02:42 crc kubenswrapper[4765]: E0121 13:02:42.240992 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:43.240968161 +0000 UTC m=+24.258693983 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:02:42 crc kubenswrapper[4765]: E0121 13:02:42.241013 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:43.241001552 +0000 UTC m=+24.258727374 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:42 crc kubenswrapper[4765]: E0121 13:02:42.241029 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:43.241021823 +0000 UTC m=+24.258747645 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:02:42 crc kubenswrapper[4765]: E0121 13:02:42.241074 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:02:43.241063084 +0000 UTC m=+24.258789066 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.243834 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.256800 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.280200 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.295552 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.307985 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.321408 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.331497 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.331553 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.331568 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.331585 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.331612 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:42Z","lastTransitionTime":"2026-01-21T13:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.438711 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:08:47.129371406 +0000 UTC Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.463791 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.463844 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.463856 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.463879 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.463906 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:42Z","lastTransitionTime":"2026-01-21T13:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.566831 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.566888 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.566900 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.566921 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.566933 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:42Z","lastTransitionTime":"2026-01-21T13:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.641140 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:02:42 crc kubenswrapper[4765]: E0121 13:02:42.641337 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.641416 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:02:42 crc kubenswrapper[4765]: E0121 13:02:42.641472 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.703088 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.703137 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.703149 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.703167 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.703180 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:42Z","lastTransitionTime":"2026-01-21T13:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.857680 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.857719 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.857729 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.857794 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.857807 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:42Z","lastTransitionTime":"2026-01-21T13:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.960168 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.960492 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.960590 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.960681 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.960753 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:42Z","lastTransitionTime":"2026-01-21T13:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.973756 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f"} Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.974158 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"dd6906a28d821e50a7f4b07d9172c96512196fdb91a180e114493279aefa9d29"} Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.975724 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d4e7deb14f5210bfb9cbe21881ca9ed7e235c3db2aacd302be1db742e9d0f313"} Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.977779 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6"} Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.978013 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b"} Jan 21 13:02:42 crc kubenswrapper[4765]: I0121 13:02:42.978330 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"99b80a681e76c7b702538c5ddedbbb36365f86116f6d03a28ca753e99c88a3b0"} Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.102686 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.102878 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.102989 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.103071 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.103138 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:43Z","lastTransitionTime":"2026-01-21T13:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.205540 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.207725 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.207858 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.207993 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.208057 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.208110 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:43Z","lastTransitionTime":"2026-01-21T13:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.211899 4765 csr.go:261] certificate signing request csr-8k7c2 is approved, waiting to be issued Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.333166 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.333290 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.333325 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.333354 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.333370 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:43 crc kubenswrapper[4765]: E0121 13:02:43.333449 4765 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:02:43 crc kubenswrapper[4765]: E0121 13:02:43.333508 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:45.33349153 +0000 UTC m=+26.351217352 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.333770 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:43 crc kubenswrapper[4765]: E0121 13:02:43.333875 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:02:45.33386701 +0000 UTC m=+26.351592832 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:02:43 crc kubenswrapper[4765]: E0121 13:02:43.333975 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:02:43 crc kubenswrapper[4765]: E0121 13:02:43.334104 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:02:43 crc kubenswrapper[4765]: E0121 13:02:43.334122 4765 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:43 crc kubenswrapper[4765]: E0121 13:02:43.334147 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:45.334140978 +0000 UTC m=+26.351866800 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:43 crc kubenswrapper[4765]: E0121 13:02:43.334027 4765 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:02:43 crc kubenswrapper[4765]: E0121 13:02:43.334180 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:45.334172969 +0000 UTC m=+26.351898791 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:02:43 crc kubenswrapper[4765]: E0121 13:02:43.334070 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:02:43 crc kubenswrapper[4765]: E0121 13:02:43.334194 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:02:43 crc kubenswrapper[4765]: E0121 13:02:43.334200 4765 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:43 crc kubenswrapper[4765]: E0121 13:02:43.334235 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:45.3342294 +0000 UTC m=+26.351955222 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.339374 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.339439 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.339509 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.339532 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.339553 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:43Z","lastTransitionTime":"2026-01-21T13:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.383305 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.419666 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.504674 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 14:20:56.255927938 +0000 UTC Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.563354 4765 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.563413 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.563423 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.563440 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.563451 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:43Z","lastTransitionTime":"2026-01-21T13:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.687369 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:43 crc kubenswrapper[4765]: E0121 13:02:43.687544 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.689526 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.689736 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.689911 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.690042 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.690192 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:43Z","lastTransitionTime":"2026-01-21T13:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.793810 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.794285 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.794446 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.794619 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.794744 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:43Z","lastTransitionTime":"2026-01-21T13:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.807109 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.823588 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.824439 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.825775 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.826510 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" 
path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.827585 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.854452 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.855583 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.856776 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.857556 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.858627 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.859266 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.860733 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.861511 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.862167 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.863330 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.863982 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.866291 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.866902 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" 
path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.868517 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.870189 4765 csr.go:257] certificate signing request csr-8k7c2 is issued Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.870508 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.871598 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.872718 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.873831 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.874418 
4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.875698 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.876366 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.878901 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.880043 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.880906 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.881943 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.883093 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.883674 4765 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.883784 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.918716 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.919623 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.920090 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.922958 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.923963 4765 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.925346 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.926180 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.927658 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.928425 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.930753 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.933032 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.934380 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.934938 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.937556 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.938085 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.938131 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.938143 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.938165 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.938178 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:43Z","lastTransitionTime":"2026-01-21T13:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.939087 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.941633 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.943853 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.944564 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.945952 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.946587 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.947746 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.948910 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.949578 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.962914 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-w5x22"] Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.963278 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-gmkg6"] Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.963582 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-gmkg6" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.963981 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-w5x22" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.972822 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.973251 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.973398 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.973521 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.973709 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.975544 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 13:02:43 crc kubenswrapper[4765]: I0121 13:02:43.977973 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.001554 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52z2f\" (UniqueName: \"kubernetes.io/projected/8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef-kube-api-access-52z2f\") pod \"node-resolver-w5x22\" (UID: \"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\") " pod="openshift-dns/node-resolver-w5x22" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.001599 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1d638a9b-eb82-48af-bf7a-dbfc68b5c931-serviceca\") pod \"node-ca-gmkg6\" (UID: \"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\") " pod="openshift-image-registry/node-ca-gmkg6" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.001625 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d638a9b-eb82-48af-bf7a-dbfc68b5c931-host\") pod \"node-ca-gmkg6\" (UID: \"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\") " pod="openshift-image-registry/node-ca-gmkg6" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.001684 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef-hosts-file\") pod \"node-resolver-w5x22\" (UID: \"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\") " pod="openshift-dns/node-resolver-w5x22" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.001708 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxkj9\" (UniqueName: \"kubernetes.io/projected/1d638a9b-eb82-48af-bf7a-dbfc68b5c931-kube-api-access-mxkj9\") pod \"node-ca-gmkg6\" (UID: \"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\") " pod="openshift-image-registry/node-ca-gmkg6" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.089077 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.089151 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.089173 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.089202 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.089307 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:44Z","lastTransitionTime":"2026-01-21T13:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.103569 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef-hosts-file\") pod \"node-resolver-w5x22\" (UID: \"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\") " pod="openshift-dns/node-resolver-w5x22" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.103620 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxkj9\" (UniqueName: \"kubernetes.io/projected/1d638a9b-eb82-48af-bf7a-dbfc68b5c931-kube-api-access-mxkj9\") pod \"node-ca-gmkg6\" (UID: \"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\") " pod="openshift-image-registry/node-ca-gmkg6" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.103693 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52z2f\" (UniqueName: \"kubernetes.io/projected/8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef-kube-api-access-52z2f\") pod \"node-resolver-w5x22\" (UID: \"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\") " pod="openshift-dns/node-resolver-w5x22" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.103735 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1d638a9b-eb82-48af-bf7a-dbfc68b5c931-serviceca\") pod \"node-ca-gmkg6\" (UID: \"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\") " pod="openshift-image-registry/node-ca-gmkg6" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.103811 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d638a9b-eb82-48af-bf7a-dbfc68b5c931-host\") pod \"node-ca-gmkg6\" (UID: \"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\") " pod="openshift-image-registry/node-ca-gmkg6" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.103904 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d638a9b-eb82-48af-bf7a-dbfc68b5c931-host\") pod \"node-ca-gmkg6\" (UID: \"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\") " pod="openshift-image-registry/node-ca-gmkg6" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.104712 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef-hosts-file\") pod \"node-resolver-w5x22\" (UID: \"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\") " pod="openshift-dns/node-resolver-w5x22" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.183926 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1d638a9b-eb82-48af-bf7a-dbfc68b5c931-serviceca\") pod \"node-ca-gmkg6\" (UID: \"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\") " pod="openshift-image-registry/node-ca-gmkg6" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.193873 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.193906 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.193917 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.193933 4765 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.193945 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:44Z","lastTransitionTime":"2026-01-21T13:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.296172 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.296202 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.296281 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.296297 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.296310 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:44Z","lastTransitionTime":"2026-01-21T13:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.325849 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.336651 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.349067 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52z2f\" (UniqueName: \"kubernetes.io/projected/8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef-kube-api-access-52z2f\") pod \"node-resolver-w5x22\" (UID: \"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\") " pod="openshift-dns/node-resolver-w5x22" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.356770 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxkj9\" (UniqueName: \"kubernetes.io/projected/1d638a9b-eb82-48af-bf7a-dbfc68b5c931-kube-api-access-mxkj9\") pod \"node-ca-gmkg6\" (UID: \"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\") " pod="openshift-image-registry/node-ca-gmkg6" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.388786 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.389122 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-gmkg6" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.406402 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.406438 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.406448 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.406465 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.406478 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:44Z","lastTransitionTime":"2026-01-21T13:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.509142 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 01:54:14.416077062 +0000 UTC Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.511958 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.512009 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.512017 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.512034 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.512046 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:44Z","lastTransitionTime":"2026-01-21T13:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.549679 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.560478 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-w5x22" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.612887 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:02:44 crc kubenswrapper[4765]: E0121 13:02:44.613037 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.613124 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:02:44 crc kubenswrapper[4765]: E0121 13:02:44.613184 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.617037 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.617060 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.617069 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.617081 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.617091 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:44Z","lastTransitionTime":"2026-01-21T13:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.721157 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.721229 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.721246 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.721266 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:44 crc kubenswrapper[4765]: I0121 13:02:44.721280 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:44Z","lastTransitionTime":"2026-01-21T13:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:44.874352 4765 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 12:57:43 +0000 UTC, rotation deadline is 2026-11-17 21:12:11.107148755 +0000 UTC Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:44.874473 4765 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7208h9m26.232712611s for next certificate rotation Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.365668 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.365854 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.365881 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.365899 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.365919 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:02:45 crc kubenswrapper[4765]: E0121 13:02:45.366046 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:02:45 crc kubenswrapper[4765]: E0121 13:02:45.366062 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:02:45 crc kubenswrapper[4765]: E0121 13:02:45.366073 4765 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:45 crc kubenswrapper[4765]: E0121 13:02:45.366132 
4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:49.366115368 +0000 UTC m=+30.383841190 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:45 crc kubenswrapper[4765]: E0121 13:02:45.366189 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:02:49.36618344 +0000 UTC m=+30.383909262 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:02:45 crc kubenswrapper[4765]: E0121 13:02:45.366244 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:02:45 crc kubenswrapper[4765]: E0121 13:02:45.366252 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:02:45 crc kubenswrapper[4765]: E0121 13:02:45.366259 4765 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:45 crc kubenswrapper[4765]: E0121 13:02:45.366280 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:49.366273342 +0000 UTC m=+30.383999164 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:45 crc kubenswrapper[4765]: E0121 13:02:45.366322 4765 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:02:45 crc kubenswrapper[4765]: E0121 13:02:45.366345 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:49.366339474 +0000 UTC m=+30.384065296 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:02:45 crc kubenswrapper[4765]: E0121 13:02:45.366385 4765 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:02:45 crc kubenswrapper[4765]: E0121 13:02:45.366405 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:49.366399836 +0000 UTC m=+30.384125658 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.421585 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.421629 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.421637 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.421653 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.421664 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:45Z","lastTransitionTime":"2026-01-21T13:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.424235 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-w5x22" event={"ID":"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef","Type":"ContainerStarted","Data":"522b0c37c19650489dfcbe9c0f547fabd4526d81ea32c13b7f3609506214651b"} Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.430677 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gmkg6" event={"ID":"1d638a9b-eb82-48af-bf7a-dbfc68b5c931","Type":"ContainerStarted","Data":"ec40e4fe4c4e79573b27327453d9664dfab1024a2a3ae807f18c7655f449bc1c"} Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.434652 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.510353 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 05:41:14.963849272 +0000 UTC Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.556581 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.556624 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.556633 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.556648 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.556659 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:45Z","lastTransitionTime":"2026-01-21T13:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.613450 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:45 crc kubenswrapper[4765]: E0121 13:02:45.613691 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.659912 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.660096 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.660119 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.660170 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.660190 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:45Z","lastTransitionTime":"2026-01-21T13:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.695639 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-21T13:02:45Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.762624 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.763102 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.763121 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.763144 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.763164 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:45Z","lastTransitionTime":"2026-01-21T13:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.795487 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-v72nq"] Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.797398 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.817094 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.817547 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.821028 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.826892 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.844128 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.872671 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.872710 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.872720 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.872743 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.872759 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:45Z","lastTransitionTime":"2026-01-21T13:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.873180 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vprvz\" (UniqueName: \"kubernetes.io/projected/e149390c-e4da-4dfd-bed2-b14de058f921-kube-api-access-vprvz\") pod \"machine-config-daemon-v72nq\" (UID: \"e149390c-e4da-4dfd-bed2-b14de058f921\") " pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.873225 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e149390c-e4da-4dfd-bed2-b14de058f921-mcd-auth-proxy-config\") pod \"machine-config-daemon-v72nq\" (UID: \"e149390c-e4da-4dfd-bed2-b14de058f921\") " pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.873255 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e149390c-e4da-4dfd-bed2-b14de058f921-rootfs\") pod \"machine-config-daemon-v72nq\" (UID: \"e149390c-e4da-4dfd-bed2-b14de058f921\") " pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.873275 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e149390c-e4da-4dfd-bed2-b14de058f921-proxy-tls\") pod \"machine-config-daemon-v72nq\" (UID: \"e149390c-e4da-4dfd-bed2-b14de058f921\") " pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.951085 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:45Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.980576 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vprvz\" (UniqueName: \"kubernetes.io/projected/e149390c-e4da-4dfd-bed2-b14de058f921-kube-api-access-vprvz\") pod \"machine-config-daemon-v72nq\" (UID: \"e149390c-e4da-4dfd-bed2-b14de058f921\") " pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.982763 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e149390c-e4da-4dfd-bed2-b14de058f921-proxy-tls\") pod \"machine-config-daemon-v72nq\" (UID: \"e149390c-e4da-4dfd-bed2-b14de058f921\") " pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.982851 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e149390c-e4da-4dfd-bed2-b14de058f921-mcd-auth-proxy-config\") pod \"machine-config-daemon-v72nq\" (UID: \"e149390c-e4da-4dfd-bed2-b14de058f921\") " pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.982931 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e149390c-e4da-4dfd-bed2-b14de058f921-rootfs\") pod \"machine-config-daemon-v72nq\" (UID: \"e149390c-e4da-4dfd-bed2-b14de058f921\") " pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.983091 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e149390c-e4da-4dfd-bed2-b14de058f921-rootfs\") pod \"machine-config-daemon-v72nq\" (UID: \"e149390c-e4da-4dfd-bed2-b14de058f921\") " pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.983392 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.983448 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.983463 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 
13:02:45.983487 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.983510 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:45Z","lastTransitionTime":"2026-01-21T13:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.985044 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e149390c-e4da-4dfd-bed2-b14de058f921-mcd-auth-proxy-config\") pod \"machine-config-daemon-v72nq\" (UID: \"e149390c-e4da-4dfd-bed2-b14de058f921\") " pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:02:45 crc kubenswrapper[4765]: I0121 13:02:45.994341 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e149390c-e4da-4dfd-bed2-b14de058f921-proxy-tls\") pod \"machine-config-daemon-v72nq\" (UID: \"e149390c-e4da-4dfd-bed2-b14de058f921\") " pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.030103 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vprvz\" (UniqueName: \"kubernetes.io/projected/e149390c-e4da-4dfd-bed2-b14de058f921-kube-api-access-vprvz\") pod \"machine-config-daemon-v72nq\" (UID: \"e149390c-e4da-4dfd-bed2-b14de058f921\") " pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.075390 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:46Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.090146 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:46Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.095273 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.095313 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.095322 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.095336 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.095347 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:46Z","lastTransitionTime":"2026-01-21T13:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.112108 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:46Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.134358 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:46Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.143421 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:02:46 crc kubenswrapper[4765]: W0121 13:02:46.155185 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode149390c_e4da_4dfd_bed2_b14de058f921.slice/crio-b88f03f091130b201ec43e7c6bd5adb62cfe5e93aa16396e8066f43328777de8 WatchSource:0}: Error finding container b88f03f091130b201ec43e7c6bd5adb62cfe5e93aa16396e8066f43328777de8: Status 404 returned error can't find the container with id b88f03f091130b201ec43e7c6bd5adb62cfe5e93aa16396e8066f43328777de8 Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.176265 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha2
56:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:46Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.197631 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.197661 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.197675 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.197701 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.197713 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:46Z","lastTransitionTime":"2026-01-21T13:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.211172 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:46Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.243615 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:46Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.262749 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:46Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.300716 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.300776 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.300789 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.300813 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.300828 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:46Z","lastTransitionTime":"2026-01-21T13:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.329291 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:46Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.353674 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:46Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.403597 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.403636 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.403645 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.403676 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.403686 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:46Z","lastTransitionTime":"2026-01-21T13:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.410070 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:46Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.446756 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-w5x22" event={"ID":"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef","Type":"ContainerStarted","Data":"a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af"} Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.449385 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gmkg6" event={"ID":"1d638a9b-eb82-48af-bf7a-dbfc68b5c931","Type":"ContainerStarted","Data":"68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02"} Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.461879 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" 
event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae"} Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.461967 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"b88f03f091130b201ec43e7c6bd5adb62cfe5e93aa16396e8066f43328777de8"} Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.464398 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289"} Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.492112 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:46Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.506464 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.506521 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.506530 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.506566 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.506576 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:46Z","lastTransitionTime":"2026-01-21T13:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.510786 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:17:04.510674258 +0000 UTC Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.538138 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-z68f6"] Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.539027 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.540674 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-bplfq"] Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.541161 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.554701 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x677d"] Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.555619 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.558288 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.559932 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:46Z is after 2025-08-24T17:21:41Z" Jan 21 
13:02:46 crc kubenswrapper[4765]: W0121 13:02:46.560117 4765 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 13:02:46 crc kubenswrapper[4765]: E0121 13:02:46.560148 4765 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.560828 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.561076 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 13:02:46 crc kubenswrapper[4765]: W0121 13:02:46.561314 4765 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 13:02:46 crc kubenswrapper[4765]: E0121 13:02:46.561335 4765 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 13:02:46 crc kubenswrapper[4765]: W0121 13:02:46.561411 4765 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 13:02:46 crc kubenswrapper[4765]: E0121 13:02:46.561424 4765 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 13:02:46 crc kubenswrapper[4765]: W0121 13:02:46.561474 4765 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 13:02:46 crc kubenswrapper[4765]: E0121 13:02:46.561488 4765 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": 
Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.562127 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 13:02:46 crc kubenswrapper[4765]: W0121 13:02:46.563391 4765 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 13:02:46 crc kubenswrapper[4765]: E0121 13:02:46.563413 4765 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.579795 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.581434 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.640443 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.640442 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:02:46 crc kubenswrapper[4765]: E0121 13:02:46.640593 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:02:46 crc kubenswrapper[4765]: E0121 13:02:46.640743 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.640823 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22f3d99e-f58c-4caa-be45-b879c6b614d3-cni-binary-copy\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.640850 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-kubelet\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.640870 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-etc-openvswitch\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.640949 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-node-log\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641093 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22f3d99e-f58c-4caa-be45-b879c6b614d3-os-release\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641151 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22f3d99e-f58c-4caa-be45-b879c6b614d3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641173 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-host-var-lib-cni-multus\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641253 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-log-socket\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641325 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-host-var-lib-cni-bin\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641385 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-var-lib-openvswitch\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641417 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22f3d99e-f58c-4caa-be45-b879c6b614d3-system-cni-dir\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641450 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-cni-bin\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641471 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22f3d99e-f58c-4caa-be45-b879c6b614d3-cnibin\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641497 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-multus-socket-dir-parent\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641536 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs4dl\" (UniqueName: \"kubernetes.io/projected/d9b9a5be-6b15-46d2-8715-506efdae8ae7-kube-api-access-bs4dl\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641558 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-multus-cni-dir\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641580 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-host-var-lib-kubelet\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641633 4765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-ovnkube-config\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641676 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9t46\" (UniqueName: \"kubernetes.io/projected/cd80c14d-ebec-4d65-8116-149400d6f8be-kube-api-access-q9t46\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641726 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-multus-conf-dir\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641763 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-cni-netd\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641787 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d9b9a5be-6b15-46d2-8715-506efdae8ae7-multus-daemon-config\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641815 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-etc-kubernetes\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641841 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xxkp\" (UniqueName: \"kubernetes.io/projected/22f3d99e-f58c-4caa-be45-b879c6b614d3-kube-api-access-4xxkp\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641867 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-os-release\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641913 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-run-openvswitch\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d"
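
The reconciler_common.go:245 entries above and below are the first phase of the kubelet's volume reconciler: VerifyControllerAttachedVolume is started once for every volume declared by the three pods being set up here (ovnkube-node-x677d, multus-bplfq and multus-additional-cni-plugins-z68f6); the reconciler_common.go:218 MountVolume and operation_generator.go:637 SetUp entries later in this section are the second and third phases for the same volumes. When auditing a burst like this it helps to collapse the entries into per-pod volume counts. A minimal sketch of that, assuming journalctl's usual one-entry-per-line output on stdin and a message shape inferred from these lines rather than any documented format:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Shape observed in this journal (inner quotes arrive backslash-escaped):
    //   ...VerifyControllerAttachedVolume started for volume \"NAME\" ... pod \"POD\" ...
    var verify = regexp.MustCompile(`VerifyControllerAttachedVolume started for volume \\?"([^"\\]+)\\?".*?pod \\?"([^"\\]+)\\?"`)

    func main() {
        perPod := map[string][]string{} // pod name -> volumes verified
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            if m := verify.FindStringSubmatch(sc.Text()); m != nil {
                perPod[m[2]] = append(perPod[m[2]], m[1])
            }
        }
        for pod, vols := range perPod {
            fmt.Printf("%s: %d volumes verified (first: %s)\n", pod, len(vols), vols[0])
        }
    }

Piping the kubelet unit's journal through it (for example journalctl -u kubelet --no-pager, if the unit is named kubelet) should report the same volume sets that reappear verbatim in the mount phases below.

Jan 21 13:02:46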
crc kubenswrapper[4765]: I0121 13:02:46.641941 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-run-ovn-kubernetes\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.641973 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.642001 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-ovnkube-script-lib\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.642026 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d9b9a5be-6b15-46d2-8715-506efdae8ae7-cni-binary-copy\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.642052 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-slash\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.642071 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-run-ovn\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.642094 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-env-overrides\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.642119 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd80c14d-ebec-4d65-8116-149400d6f8be-ovn-node-metrics-cert\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.642175 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-hostroot\") pod \"multus-bplfq\" (UID: 
\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.642199 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-run-systemd\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.642251 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22f3d99e-f58c-4caa-be45-b879c6b614d3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.642612 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-system-cni-dir\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.642686 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-cnibin\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.642733 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-host-run-netns\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.642785 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-host-run-k8s-cni-cncf-io\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.642854 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-systemd-units\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.642968 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-run-netns\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.643006 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-host-run-multus-certs\") pod 
\"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.643983 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 13:02:46 crc kubenswrapper[4765]: W0121 13:02:46.644002 4765 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 13:02:46 crc kubenswrapper[4765]: E0121 13:02:46.644104 4765 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.643983 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.644177 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.644193 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.644227 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.644240 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:46Z","lastTransitionTime":"2026-01-21T13:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:46 crc kubenswrapper[4765]: W0121 13:02:46.657339 4765 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 13:02:46 crc kubenswrapper[4765]: E0121 13:02:46.657395 4765 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744298 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-multus-conf-dir\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744370 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-cni-netd\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744407 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d9b9a5be-6b15-46d2-8715-506efdae8ae7-multus-daemon-config\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744439 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-etc-kubernetes\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744466 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-run-openvswitch\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744501 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xxkp\" (UniqueName: \"kubernetes.io/projected/22f3d99e-f58c-4caa-be45-b879c6b614d3-kube-api-access-4xxkp\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744536 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-os-release\") pod \"multus-bplfq\" (UID: 
\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744565 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-run-ovn-kubernetes\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744588 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744610 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-ovnkube-script-lib\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744628 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d9b9a5be-6b15-46d2-8715-506efdae8ae7-cni-binary-copy\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744647 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-run-ovn\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744676 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-env-overrides\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744713 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd80c14d-ebec-4d65-8116-149400d6f8be-ovn-node-metrics-cert\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744753 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-slash\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744786 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-hostroot\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 
21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744817 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22f3d99e-f58c-4caa-be45-b879c6b614d3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744848 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-system-cni-dir\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744870 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-run-systemd\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744907 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-cnibin\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744936 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-host-run-netns\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.744994 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-systemd-units\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745034 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-run-netns\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745062 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-host-run-k8s-cni-cncf-io\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745086 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-host-run-multus-certs\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745133 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-kubelet\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745163 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-etc-openvswitch\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745196 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-node-log\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745253 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22f3d99e-f58c-4caa-be45-b879c6b614d3-cni-binary-copy\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745283 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-host-var-lib-cni-multus\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745318 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22f3d99e-f58c-4caa-be45-b879c6b614d3-os-release\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745346 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22f3d99e-f58c-4caa-be45-b879c6b614d3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745376 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-host-var-lib-cni-bin\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745404 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-log-socket\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745429 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-var-lib-openvswitch\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745455 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22f3d99e-f58c-4caa-be45-b879c6b614d3-system-cni-dir\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745478 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-multus-socket-dir-parent\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745500 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs4dl\" (UniqueName: \"kubernetes.io/projected/d9b9a5be-6b15-46d2-8715-506efdae8ae7-kube-api-access-bs4dl\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745526 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-cni-bin\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745553 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22f3d99e-f58c-4caa-be45-b879c6b614d3-cnibin\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745578 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-ovnkube-config\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745603 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9t46\" (UniqueName: \"kubernetes.io/projected/cd80c14d-ebec-4d65-8116-149400d6f8be-kube-api-access-q9t46\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745626 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-multus-cni-dir\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745649 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-host-var-lib-kubelet\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745885 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-multus-conf-dir\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.745944 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-cni-netd\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.746790 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d9b9a5be-6b15-46d2-8715-506efdae8ae7-multus-daemon-config\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.746848 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-etc-kubernetes\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.746885 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-run-openvswitch\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.747278 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-os-release\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.747317 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-run-ovn-kubernetes\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.747345 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.747900 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d9b9a5be-6b15-46d2-8715-506efdae8ae7-cni-binary-copy\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " 
pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.747946 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-run-ovn\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.748136 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-slash\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.748178 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-hostroot\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.749104 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22f3d99e-f58c-4caa-be45-b879c6b614d3-os-release\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.749270 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-system-cni-dir\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.749311 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-run-systemd\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.749356 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-cnibin\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.749387 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-host-run-netns\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.749422 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-systemd-units\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.749458 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-run-netns\") pod 
\"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.749491 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-host-run-k8s-cni-cncf-io\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.749524 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-host-run-multus-certs\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.749561 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-kubelet\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.749593 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-etc-openvswitch\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.749627 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-node-log\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.750281 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22f3d99e-f58c-4caa-be45-b879c6b614d3-cni-binary-copy\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.750338 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-host-var-lib-cni-multus\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.750461 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.750483 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.750495 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.750513 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.750525 4765 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:46Z","lastTransitionTime":"2026-01-21T13:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.751136 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-var-lib-openvswitch\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.751183 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-cni-bin\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.751196 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22f3d99e-f58c-4caa-be45-b879c6b614d3-cnibin\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.751266 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-log-socket\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.751287 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-host-var-lib-cni-bin\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.751324 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22f3d99e-f58c-4caa-be45-b879c6b614d3-system-cni-dir\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.751445 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-multus-socket-dir-parent\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.751461 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-multus-cni-dir\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.751493 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d9b9a5be-6b15-46d2-8715-506efdae8ae7-host-var-lib-kubelet\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.755357 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22f3d99e-f58c-4caa-be45-b879c6b614d3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.755814 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22f3d99e-f58c-4caa-be45-b879c6b614d3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.787145 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:46Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.847959 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs4dl\" (UniqueName: \"kubernetes.io/projected/d9b9a5be-6b15-46d2-8715-506efdae8ae7-kube-api-access-bs4dl\") pod \"multus-bplfq\" (UID: \"d9b9a5be-6b15-46d2-8715-506efdae8ae7\") " pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.851470 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xxkp\" (UniqueName: \"kubernetes.io/projected/22f3d99e-f58c-4caa-be45-b879c6b614d3-kube-api-access-4xxkp\") pod \"multus-additional-cni-plugins-z68f6\" (UID: \"22f3d99e-f58c-4caa-be45-b879c6b614d3\") " pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.853107 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.853139 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.853152 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.853169 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.853184 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:46Z","lastTransitionTime":"2026-01-21T13:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.857451 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-z68f6" Jan 21 13:02:46 crc kubenswrapper[4765]: W0121 13:02:46.892608 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22f3d99e_f58c_4caa_be45_b879c6b614d3.slice/crio-e84810f0f2d1fcdaec39802705989fce9f8254bd9e2b60f351988c4d9254c5dd WatchSource:0}: Error finding container e84810f0f2d1fcdaec39802705989fce9f8254bd9e2b60f351988c4d9254c5dd: Status 404 returned error can't find the container with id e84810f0f2d1fcdaec39802705989fce9f8254bd9e2b60f351988c4d9254c5dd Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.893284 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:46Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.957255 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-bplfq" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.958385 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.958488 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.958543 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.958603 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:46 crc kubenswrapper[4765]: I0121 13:02:46.958686 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:46Z","lastTransitionTime":"2026-01-21T13:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.053581 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:47Z is after 
2025-08-24T17:21:41Z" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.069803 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.069858 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.069876 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.069904 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.069939 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:47Z","lastTransitionTime":"2026-01-21T13:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.178288 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.178322 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.178333 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.178350 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.178362 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:47Z","lastTransitionTime":"2026-01-21T13:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.236153 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:47Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.267907 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:47Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.289700 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.289761 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.289776 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.289798 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.289811 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:47Z","lastTransitionTime":"2026-01-21T13:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.320174 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:47Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.364219 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:47Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.373793 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.383691 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/cd80c14d-ebec-4d65-8116-149400d6f8be-ovn-node-metrics-cert\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.392426 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.392705 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.392792 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.392895 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.392976 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:47Z","lastTransitionTime":"2026-01-21T13:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.394620 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:47Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.413739 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:47Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.429321 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:47Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.443190 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:47Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.469318 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5"} Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.471079 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bplfq" event={"ID":"d9b9a5be-6b15-46d2-8715-506efdae8ae7","Type":"ContainerStarted","Data":"b65ac709999274cfed114d98e01b285cf850a42517195100448ca0da177cbc80"} Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.472268 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" event={"ID":"22f3d99e-f58c-4caa-be45-b879c6b614d3","Type":"ContainerStarted","Data":"e84810f0f2d1fcdaec39802705989fce9f8254bd9e2b60f351988c4d9254c5dd"} Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.495525 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.495564 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.495573 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.495588 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.495599 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:47Z","lastTransitionTime":"2026-01-21T13:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.495843 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:47Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.511723 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 21:51:15.302619083 +0000 UTC Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.513554 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:47Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.544632 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:47Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.570241 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:47Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.598967 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.598997 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.599010 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.599025 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.599036 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:47Z","lastTransitionTime":"2026-01-21T13:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.638129 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:47 crc kubenswrapper[4765]: E0121 13:02:47.638311 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.649277 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:47Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.666891 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:47Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.673842 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.689901 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:47Z is after 2025-08-24T17:21:41Z"
Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.737384 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.737708 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.737808 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.737899 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.737976 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:47Z","lastTransitionTime":"2026-01-21T13:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:02:47 crc kubenswrapper[4765]: E0121 13:02:47.779611 4765 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition
Jan 21 13:02:47 crc kubenswrapper[4765]: E0121 13:02:47.779627 4765 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition
Jan 21 13:02:47 crc kubenswrapper[4765]: E0121 13:02:47.779760 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-ovnkube-config podName:cd80c14d-ebec-4d65-8116-149400d6f8be nodeName:}" failed. No retries permitted until 2026-01-21 13:02:48.279727607 +0000 UTC m=+29.297453429 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-ovnkube-config") pod "ovnkube-node-x677d" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be") : failed to sync configmap cache: timed out waiting for the condition
Jan 21 13:02:47 crc kubenswrapper[4765]: E0121 13:02:47.779748 4765 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition
Jan 21 13:02:47 crc kubenswrapper[4765]: E0121 13:02:47.779928 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-env-overrides podName:cd80c14d-ebec-4d65-8116-149400d6f8be nodeName:}" failed. No retries permitted until 2026-01-21 13:02:48.279882881 +0000 UTC m=+29.297608703 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-env-overrides") pod "ovnkube-node-x677d" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be") : failed to sync configmap cache: timed out waiting for the condition Jan 21 13:02:47 crc kubenswrapper[4765]: E0121 13:02:47.780010 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-ovnkube-script-lib podName:cd80c14d-ebec-4d65-8116-149400d6f8be nodeName:}" failed. No retries permitted until 2026-01-21 13:02:48.279975013 +0000 UTC m=+29.297700865 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-ovnkube-script-lib") pod "ovnkube-node-x677d" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be") : failed to sync configmap cache: timed out waiting for the condition Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.785895 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-01-21T13:02:47Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.949833 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:47Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.953522 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.953553 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.953567 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.953591 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:47 crc kubenswrapper[4765]: I0121 13:02:47.953608 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:47Z","lastTransitionTime":"2026-01-21T13:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:48 crc kubenswrapper[4765]: E0121 13:02:48.011153 4765 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 21 13:02:48 crc kubenswrapper[4765]: E0121 13:02:48.011237 4765 projected.go:194] Error preparing data for projected volume kube-api-access-q9t46 for pod openshift-ovn-kubernetes/ovnkube-node-x677d: failed to sync configmap cache: timed out waiting for the condition Jan 21 13:02:48 crc kubenswrapper[4765]: E0121 13:02:48.011335 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cd80c14d-ebec-4d65-8116-149400d6f8be-kube-api-access-q9t46 podName:cd80c14d-ebec-4d65-8116-149400d6f8be nodeName:}" failed. No retries permitted until 2026-01-21 13:02:48.51129368 +0000 UTC m=+29.529019502 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-q9t46" (UniqueName: "kubernetes.io/projected/cd80c14d-ebec-4d65-8116-149400d6f8be-kube-api-access-q9t46") pod "ovnkube-node-x677d" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be") : failed to sync configmap cache: timed out waiting for the condition
Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.014733 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.014753 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.019850 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.063577 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.063611 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.063620 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.063655 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.063674 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:48Z","lastTransitionTime":"2026-01-21T13:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.145906 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.147233 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.198476 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.202163 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.202231 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.202246 4765 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.202265 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.202375 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:48Z","lastTransitionTime":"2026-01-21T13:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.211686 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.229686 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.245867 4765 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.286789 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-ovnkube-config\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" 
Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.286862 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-ovnkube-script-lib\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d"
Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.286895 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-env-overrides\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d"
Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.287735 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-env-overrides\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d"
Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.287754 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-ovnkube-config\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d"
Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.288285 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-ovnkube-script-lib\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d"
Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.290969 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.310519 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.310562 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.310576 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.310595 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.310606 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:48Z","lastTransitionTime":"2026-01-21T13:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.324791 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.414543 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.414597 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.414608 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.414626 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.414640 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:48Z","lastTransitionTime":"2026-01-21T13:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.438708 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.458244 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-cont
roller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.512848 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:04:14.641767547 +0000 UTC Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.575915 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.575960 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.575973 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 
13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.575992 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.576005 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:48Z","lastTransitionTime":"2026-01-21T13:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.641976 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bplfq" event={"ID":"d9b9a5be-6b15-46d2-8715-506efdae8ae7","Type":"ContainerStarted","Data":"9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce"} Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.642371 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:02:48 crc kubenswrapper[4765]: E0121 13:02:48.642547 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.642723 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:02:48 crc kubenswrapper[4765]: E0121 13:02:48.642846 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.643192 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9t46\" (UniqueName: \"kubernetes.io/projected/cd80c14d-ebec-4d65-8116-149400d6f8be-kube-api-access-q9t46\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.705689 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9t46\" (UniqueName: \"kubernetes.io/projected/cd80c14d-ebec-4d65-8116-149400d6f8be-kube-api-access-q9t46\") pod \"ovnkube-node-x677d\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.707238 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.707266 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.707278 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.707307 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.707328 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:48Z","lastTransitionTime":"2026-01-21T13:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.707232 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" event={"ID":"22f3d99e-f58c-4caa-be45-b879c6b614d3","Type":"ContainerStarted","Data":"bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c"} Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.782667 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:48Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.841375 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.853691 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.853764 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.853783 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.853808 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.853825 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:48Z","lastTransitionTime":"2026-01-21T13:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.998288 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.998355 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.998377 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.998408 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:48 crc kubenswrapper[4765]: I0121 13:02:48.998433 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:48Z","lastTransitionTime":"2026-01-21T13:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.076485 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.101123 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.101183 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.101197 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.101254 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.101271 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:49Z","lastTransitionTime":"2026-01-21T13:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.138687 4765 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.140177 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/node-ca-gmkg6/status\": read tcp 38.129.56.144:51950->38.129.56.144:6443: use of closed network connection" Jan 21 13:02:49 crc kubenswrapper[4765]: W0121 13:02:49.140953 4765 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 21 13:02:49 crc kubenswrapper[4765]: W0121 13:02:49.148745 4765 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": Unexpected watch close - watch lasted less than a second and no items received Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.255980 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.256386 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.256395 4765 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.256413 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.256424 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:49Z","lastTransitionTime":"2026-01-21T13:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.335513 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.352750 4765 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:49Z is after 2025-08-24T17:21:41Z" Jan 21 
13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.360014 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.360063 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.360074 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.360093 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.360109 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:49Z","lastTransitionTime":"2026-01-21T13:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.367254 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.416428 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:02:49 crc kubenswrapper[4765]: E0121 13:02:49.416526 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:02:57.416385047 +0000 UTC m=+38.434110869 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.416552 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.416581 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.416617 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 
13:02:49.416646 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:02:49 crc kubenswrapper[4765]: E0121 13:02:49.416777 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:02:49 crc kubenswrapper[4765]: E0121 13:02:49.416794 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:02:49 crc kubenswrapper[4765]: E0121 13:02:49.416806 4765 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:49 crc kubenswrapper[4765]: E0121 13:02:49.416846 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:57.416837719 +0000 UTC m=+38.434563541 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:49 crc kubenswrapper[4765]: E0121 13:02:49.416900 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:02:49 crc kubenswrapper[4765]: E0121 13:02:49.416911 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:02:49 crc kubenswrapper[4765]: E0121 13:02:49.416918 4765 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:49 crc kubenswrapper[4765]: E0121 13:02:49.416946 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:57.416934872 +0000 UTC m=+38.434660694 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:49 crc kubenswrapper[4765]: E0121 13:02:49.417117 4765 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:02:49 crc kubenswrapper[4765]: E0121 13:02:49.417149 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:57.417137668 +0000 UTC m=+38.434863490 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:02:49 crc kubenswrapper[4765]: E0121 13:02:49.417198 4765 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:02:49 crc kubenswrapper[4765]: E0121 13:02:49.417239 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:02:57.41723097 +0000 UTC m=+38.434956792 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.455176 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f
36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.462394 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.462657 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.462742 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.462824 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.462945 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:49Z","lastTransitionTime":"2026-01-21T13:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.475576 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.504971 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.630955 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:30:36.431094787 +0000 UTC Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.641840 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:49 crc kubenswrapper[4765]: E0121 13:02:49.642127 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.789654 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.789776 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.790294 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.790372 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.790438 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.790497 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:49Z","lastTransitionTime":"2026-01-21T13:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.805255 4765 generic.go:334] "Generic (PLEG): container finished" podID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerID="5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154" exitCode=0 Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.805819 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerDied","Data":"5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154"} Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.805856 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerStarted","Data":"96dc92bd7b3deceb8264fd2e0ed1448add4eb9487a80efec115495823ae95818"} Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.929958 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.930006 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.930020 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.930038 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.930053 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:49Z","lastTransitionTime":"2026-01-21T13:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.954633 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:49 crc kubenswrapper[4765]: I0121 13:02:49.988296 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2
026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.023021 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.055780 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.056134 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 
13:02:50.056235 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.056367 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.056466 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:50Z","lastTransitionTime":"2026-01-21T13:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.061166 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.078222 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.136779 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.155060 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.169962 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.175600 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.175669 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.175681 4765 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.175720 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.175732 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:50Z","lastTransitionTime":"2026-01-21T13:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.184379 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.206534 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.234070 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z 
is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.259993 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.276098 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.278480 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.278538 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.278549 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.278570 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.278579 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:50Z","lastTransitionTime":"2026-01-21T13:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.292223 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.304549 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.321809 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.358189 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.374587 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.380979 4765 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.381013 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.381023 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.381039 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.381051 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:50Z","lastTransitionTime":"2026-01-21T13:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.397929 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.413861 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.424871 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.440435 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.459693 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.483688 4765 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.483750 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.483765 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.483786 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.483807 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:50Z","lastTransitionTime":"2026-01-21T13:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.488327 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.515677 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.535062 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.556519 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.578676 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.586121 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.586161 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.586170 4765 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.586185 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.586196 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:50Z","lastTransitionTime":"2026-01-21T13:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.596071 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.613076 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:02:50 crc kubenswrapper[4765]: E0121 13:02:50.613265 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.613744 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:02:50 crc kubenswrapper[4765]: E0121 13:02:50.613804 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.613843 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.631813 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 
22:20:59.014011993 +0000 UTC Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.642492 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415
be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.688519 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.688631 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.688649 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.688673 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.688692 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:50Z","lastTransitionTime":"2026-01-21T13:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.708499 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.763042 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.854280 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.854311 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.854319 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.854333 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.854343 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:50Z","lastTransitionTime":"2026-01-21T13:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.857970 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerStarted","Data":"9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083"} Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.859141 4765 generic.go:334] "Generic (PLEG): container finished" podID="22f3d99e-f58c-4caa-be45-b879c6b614d3" containerID="bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c" exitCode=0 Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.859175 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" event={"ID":"22f3d99e-f58c-4caa-be45-b879c6b614d3","Type":"ContainerDied","Data":"bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c"} Jan 21 13:02:50 crc kubenswrapper[4765]: I0121 13:02:50.936888 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:50Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.094771 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.094816 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.094828 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.094845 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.094855 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:51Z","lastTransitionTime":"2026-01-21T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.178269 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.235425 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.235472 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.235483 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.235502 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.235523 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:51Z","lastTransitionTime":"2026-01-21T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.368942 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.368987 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.368997 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.369016 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.369027 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:51Z","lastTransitionTime":"2026-01-21T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.440041 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.471929 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.472388 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.472789 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.472957 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.473078 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:51Z","lastTransitionTime":"2026-01-21T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.477870 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.497410 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.540677 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.560127 4765 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.560524 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.560593 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.560661 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.560721 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:51Z","lastTransitionTime":"2026-01-21T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.566462 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.613400 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:51 crc kubenswrapper[4765]: E0121 13:02:51.613892 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.632798 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 07:32:00.619938372 +0000 UTC Jan 21 13:02:51 crc kubenswrapper[4765]: E0121 13:02:51.633288 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.633682 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.642753 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.642785 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.642794 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.642808 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.642819 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:51Z","lastTransitionTime":"2026-01-21T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.659289 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc kubenswrapper[4765]: E0121 13:02:51.661117 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"6
6943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.667722 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.667991 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.668090 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.668186 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.668296 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:51Z","lastTransitionTime":"2026-01-21T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.673841 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc kubenswrapper[4765]: E0121 13:02:51.683179 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.688095 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.688151 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.688163 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.688178 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.688189 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:51Z","lastTransitionTime":"2026-01-21T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.691383 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc kubenswrapper[4765]: E0121 13:02:51.702602 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"6
6943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.708153 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.708200 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.708224 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.708242 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.708254 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:51Z","lastTransitionTime":"2026-01-21T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.710983 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc 
kubenswrapper[4765]: E0121 13:02:51.721368 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc kubenswrapper[4765]: E0121 13:02:51.721591 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.724416 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:51 
crc kubenswrapper[4765]: I0121 13:02:51.724470 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.724486 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.724508 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.724521 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:51Z","lastTransitionTime":"2026-01-21T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.731282 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:51Z 
is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.746850 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.762579 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.778397 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:51Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.827860 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.827925 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.827944 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.827975 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.827990 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:51Z","lastTransitionTime":"2026-01-21T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.866108 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerStarted","Data":"b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741"} Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.930851 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.931187 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.931298 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.931423 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:51 crc kubenswrapper[4765]: I0121 13:02:51.931529 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:51Z","lastTransitionTime":"2026-01-21T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.041250 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.041331 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.041349 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.041380 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.041398 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:52Z","lastTransitionTime":"2026-01-21T13:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.144853 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.144900 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.144910 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.144931 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.144948 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:52Z","lastTransitionTime":"2026-01-21T13:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.235105 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.249947 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.250001 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.250011 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.250030 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.250041 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:52Z","lastTransitionTime":"2026-01-21T13:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.353880 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.353971 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.354029 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.354099 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.354127 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:52Z","lastTransitionTime":"2026-01-21T13:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.457140 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.457185 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.457201 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.457241 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.457253 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:52Z","lastTransitionTime":"2026-01-21T13:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.560387 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.560441 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.560453 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.560470 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.560482 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:52Z","lastTransitionTime":"2026-01-21T13:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.613179 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.613259 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:02:52 crc kubenswrapper[4765]: E0121 13:02:52.613427 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:02:52 crc kubenswrapper[4765]: E0121 13:02:52.613941 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.661185 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:09:25.494936758 +0000 UTC Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.730057 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.730345 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.730426 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.730510 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.730574 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:52Z","lastTransitionTime":"2026-01-21T13:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.852398 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.852664 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.852772 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.852876 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.852974 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:52Z","lastTransitionTime":"2026-01-21T13:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.901555 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" event={"ID":"22f3d99e-f58c-4caa-be45-b879c6b614d3","Type":"ContainerStarted","Data":"a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce"} Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.906553 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerStarted","Data":"bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e"} Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.922013 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:52Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.937872 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:52Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.956440 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.956486 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.956496 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.956525 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.956540 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:52Z","lastTransitionTime":"2026-01-21T13:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.968915 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:52Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:52 crc kubenswrapper[4765]: I0121 13:02:52.987199 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:52Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.002086 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:52Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.015751 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:53Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.030383 4765 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:53Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.046078 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:53Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.057558 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:53Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.059997 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.060101 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.060117 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.060138 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.060156 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:53Z","lastTransitionTime":"2026-01-21T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.068983 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:53Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.081192 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:53Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.094832 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:53Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.112385 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:53Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.132335 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev
/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:53Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.164244 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.164287 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.164299 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.164319 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.164332 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:53Z","lastTransitionTime":"2026-01-21T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.267961 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.268341 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.268665 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.268808 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.268937 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:53Z","lastTransitionTime":"2026-01-21T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.372420 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.372520 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.372553 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.372600 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.372639 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:53Z","lastTransitionTime":"2026-01-21T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.475599 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.475638 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.475648 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.475686 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.475702 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:53Z","lastTransitionTime":"2026-01-21T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.579138 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.579190 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.579201 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.579237 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.579394 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:53Z","lastTransitionTime":"2026-01-21T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.612809 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:53 crc kubenswrapper[4765]: E0121 13:02:53.612995 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.663015 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 09:49:19.910764298 +0000 UTC Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.683381 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.683472 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.683498 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.683530 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.683561 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:53Z","lastTransitionTime":"2026-01-21T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.786464 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.786507 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.786516 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.786538 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.786549 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:53Z","lastTransitionTime":"2026-01-21T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.889597 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.889640 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.889652 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.889668 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.889681 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:53Z","lastTransitionTime":"2026-01-21T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.992739 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.992787 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.992800 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.992817 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:53 crc kubenswrapper[4765]: I0121 13:02:53.992829 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:53Z","lastTransitionTime":"2026-01-21T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.096409 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.096774 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.096852 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.096968 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.097054 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:54Z","lastTransitionTime":"2026-01-21T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.220603 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.221158 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.221174 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.221200 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.221243 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:54Z","lastTransitionTime":"2026-01-21T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.323468 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.323521 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.323536 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.323556 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.323572 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:54Z","lastTransitionTime":"2026-01-21T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.344329 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.435880 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.435928 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.435938 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.435956 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.435970 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:54Z","lastTransitionTime":"2026-01-21T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.489386 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:54Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.581522 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev
/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:54Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.583270 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.583301 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.583312 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.583329 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.583339 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:54Z","lastTransitionTime":"2026-01-21T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.631996 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.632081 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:02:54 crc kubenswrapper[4765]: E0121 13:02:54.632152 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:02:54 crc kubenswrapper[4765]: E0121 13:02:54.632297 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.639957 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:54Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.664082 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 
UTC, rotation deadline is 2025-12-25 06:30:52.220050519 +0000 UTC Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.674304 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:54Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.754133 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:54 crc 
kubenswrapper[4765]: I0121 13:02:54.754190 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.754202 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.754242 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.754256 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:54Z","lastTransitionTime":"2026-01-21T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.759554 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:54Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.815616 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:54Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.851458 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:54Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.857306 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.857340 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.857353 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.857372 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.857383 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:54Z","lastTransitionTime":"2026-01-21T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
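
Every "Failed to update status for pod" entry above fails for the same root cause, spelled out in the error text: the kubelet's status patch is intercepted by the pod.network-node-identity.openshift.io admission webhook on 127.0.0.1:9743, whose serving certificate expired on 2025-08-24, while the node clock reads 2026-01-21. A minimal sketch to confirm that from the node itself (assuming the port is reachable locally; the dial deliberately skips verification so the expired certificate can still be read):

```go
// certcheck.go - a minimal sketch: dial the webhook endpoint named in the
// kubelet errors above and print the serving certificate's validity window.
// Assumes it runs on the node itself, where 127.0.0.1:9743 is reachable.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// InsecureSkipVerify lets us fetch the certificate even though
	// verification fails - that failure is exactly what we are debugging.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
}
```

On this node the expected output would show notAfter 2025-08-24T17:21:41Z and expired true, matching the x509 error repeated in every patch failure.
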
Has your network provider started?"} Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.865806 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:54Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.879303 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:54Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.892476 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:54Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.914394 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:54Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:54 crc kubenswrapper[4765]: I0121 13:02:54.998965 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:54Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:54.999116 4765 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:54.999146 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:54.999157 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:54.999173 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:54.999184 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:54Z","lastTransitionTime":"2026-01-21T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.006834 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerStarted","Data":"040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c"} Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.007372 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerStarted","Data":"7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22"} Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.015273 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:55Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.027126 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:55Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.107825 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.107866 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.107878 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.107896 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.107908 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:55Z","lastTransitionTime":"2026-01-21T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.210222 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.210258 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.210267 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.210282 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.210295 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:55Z","lastTransitionTime":"2026-01-21T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.312501 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.312542 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.312554 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.312573 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.312586 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:55Z","lastTransitionTime":"2026-01-21T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.415613 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.415670 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.415678 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.415711 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.415722 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:55Z","lastTransitionTime":"2026-01-21T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
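
The NodeNotReady heartbeats repeating through this window all carry the same reason: the container runtime finds no CNI configuration under /etc/kubernetes/cni/net.d/, which the network plugin (here OVN-Kubernetes, fronted by multus) writes only once it is up. A sketch of the same kind of check; the accepted extensions mirror common libcni behaviour and are an assumption here, not a quote of the runtime's source:

```go
// cnicheck.go - a minimal sketch of the check behind "no CNI configuration
// file in /etc/kubernetes/cni/net.d/": look for any .conf/.conflist/.json
// file in the conf dir.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		log.Fatalf("read %s: %v", confDir, err)
	}
	var found []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration file found - network plugin not ready")
		return
	}
	fmt.Println("CNI configs present:", found)
}
```
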
Has your network provider started?"} Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.579266 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.579299 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.579308 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.579322 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.579331 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:55Z","lastTransitionTime":"2026-01-21T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.614165 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:55 crc kubenswrapper[4765]: E0121 13:02:55.614366 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.664337 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 15:31:44.044168168 +0000 UTC Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.684833 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.684899 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.684920 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.684948 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.684970 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:55Z","lastTransitionTime":"2026-01-21T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
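
The certificate_manager line in this stretch is also worth noting: the kubelet-serving certificate is valid until 2026-02-24, but its rotation deadline (2025-12-04) is already behind the node clock (2026-01-21), so the manager should attempt rotation immediately. The deadline itself is chosen as a jittered fraction of the certificate's validity window; a sketch of that computation, with constants (roughly a 70% base plus up to 20% jitter) modelled on client-go's certificate manager but treated as an assumption for illustration:

```go
// rotation.go - sketch of how a rotation deadline like the one logged above
// is derived: a jittered fraction of the certificate's validity window.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// Base 70% of the window, plus up to 20% random jitter on the base.
	jittered := time.Duration(float64(total) * 0.7 * (1 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiry taken from the log line; a one-year validity is assumed.
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-365 * 24 * time.Hour)
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline.Format(time.RFC3339))
	fmt.Println("past deadline now:", time.Now().After(deadline))
}
```
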
Has your network provider started?"} Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.807787 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.807858 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.807877 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.807902 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.807920 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:55Z","lastTransitionTime":"2026-01-21T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.910165 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.910438 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.910453 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.910472 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:55 crc kubenswrapper[4765]: I0121 13:02:55.910484 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:55Z","lastTransitionTime":"2026-01-21T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.013297 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.013336 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.013345 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.013359 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.013369 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:56Z","lastTransitionTime":"2026-01-21T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.014827 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerStarted","Data":"2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d"} Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.017026 4765 generic.go:334] "Generic (PLEG): container finished" podID="22f3d99e-f58c-4caa-be45-b879c6b614d3" containerID="a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce" exitCode=0 Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.017078 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" event={"ID":"22f3d99e-f58c-4caa-be45-b879c6b614d3","Type":"ContainerDied","Data":"a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce"} Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.035086 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:56Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.053129 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:56Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.077869 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:56Z 
is after 2025-08-24T17:21:41Z" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.095824 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:56Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.107562 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:56Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.116404 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.116441 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.116452 4765 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.116469 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.116481 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:56Z","lastTransitionTime":"2026-01-21T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.121162 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:56Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.139623 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:56Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.156300 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:56Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.170468 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:56Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.190589 4765 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:56Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.202429 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:56Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.220035 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.220080 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.220092 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.220111 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.220122 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:56Z","lastTransitionTime":"2026-01-21T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.220740 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:56Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.245234 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:56Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.265916 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:56Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.324761 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.324808 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.324824 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.324846 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.324864 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:56Z","lastTransitionTime":"2026-01-21T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.427694 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.427740 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.427754 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.427772 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.427787 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:56Z","lastTransitionTime":"2026-01-21T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.530188 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.530256 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.530271 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.530289 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.530301 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:56Z","lastTransitionTime":"2026-01-21T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.612958 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:02:56 crc kubenswrapper[4765]: E0121 13:02:56.613160 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.612958 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:02:56 crc kubenswrapper[4765]: E0121 13:02:56.613834 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.634541 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.634579 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.634590 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.634605 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.634616 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:56Z","lastTransitionTime":"2026-01-21T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.665523 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 01:10:58.619092357 +0000 UTC Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.737892 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.737934 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.737947 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.737963 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.737975 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:56Z","lastTransitionTime":"2026-01-21T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.840320 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.840717 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.840827 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.840925 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.841010 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:56Z","lastTransitionTime":"2026-01-21T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.944679 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.944735 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.944747 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.944769 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:56 crc kubenswrapper[4765]: I0121 13:02:56.944782 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:56Z","lastTransitionTime":"2026-01-21T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.022699 4765 generic.go:334] "Generic (PLEG): container finished" podID="22f3d99e-f58c-4caa-be45-b879c6b614d3" containerID="7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3" exitCode=0 Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.022748 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" event={"ID":"22f3d99e-f58c-4caa-be45-b879c6b614d3","Type":"ContainerDied","Data":"7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3"} Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.047048 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:57Z 
is after 2025-08-24T17:21:41Z" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.048456 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.048499 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.048508 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.048530 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.048539 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:57Z","lastTransitionTime":"2026-01-21T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.070652 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:57Z is after 2025-08-24T17:21:41Z" Jan 21 
13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.086132 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:57Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.105878 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:57Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.123393 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:57Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.141106 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:57Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.151382 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.151420 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.151433 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.151454 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.151469 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:57Z","lastTransitionTime":"2026-01-21T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.157635 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:57Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.196257 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:57Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.214417 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:57Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.231199 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:57Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.246976 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:57Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.255646 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.255690 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.255715 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.255733 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.255744 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:57Z","lastTransitionTime":"2026-01-21T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.263815 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:57Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.281468 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:57Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.296257 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:57Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.358125 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.358170 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.358181 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.358242 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.358254 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:57Z","lastTransitionTime":"2026-01-21T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.427888 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.428036 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.428077 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.428102 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.428130 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:02:57 crc kubenswrapper[4765]: E0121 13:02:57.428166 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:03:13.428135148 +0000 UTC m=+54.445860980 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:02:57 crc kubenswrapper[4765]: E0121 13:02:57.428315 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:02:57 crc kubenswrapper[4765]: E0121 13:02:57.428335 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:02:57 crc kubenswrapper[4765]: E0121 13:02:57.428336 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:02:57 crc kubenswrapper[4765]: E0121 13:02:57.428361 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:02:57 crc kubenswrapper[4765]: E0121 13:02:57.428377 4765 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:57 crc kubenswrapper[4765]: E0121 13:02:57.428379 4765 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:02:57 crc kubenswrapper[4765]: E0121 13:02:57.428347 4765 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:57 crc kubenswrapper[4765]: E0121 13:02:57.428424 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:03:13.428414195 +0000 UTC m=+54.446140017 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:02:57 crc kubenswrapper[4765]: E0121 13:02:57.428440 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 13:03:13.428432066 +0000 UTC m=+54.446157888 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:57 crc kubenswrapper[4765]: E0121 13:02:57.428453 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 13:03:13.428446506 +0000 UTC m=+54.446172328 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:02:57 crc kubenswrapper[4765]: E0121 13:02:57.428505 4765 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:02:57 crc kubenswrapper[4765]: E0121 13:02:57.428554 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:03:13.428538579 +0000 UTC m=+54.446264421 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.461221 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.461256 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.461270 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.461288 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.461301 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:57Z","lastTransitionTime":"2026-01-21T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.564708 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.564756 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.564766 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.564784 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.564794 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:57Z","lastTransitionTime":"2026-01-21T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.613051 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:57 crc kubenswrapper[4765]: E0121 13:02:57.613273 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.665713 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:49:01.207115137 +0000 UTC Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.667122 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.667151 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.667162 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.667180 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.667191 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:57Z","lastTransitionTime":"2026-01-21T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.775576 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.775654 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.775667 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.775688 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.775700 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:57Z","lastTransitionTime":"2026-01-21T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.879060 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.879574 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.879590 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.879611 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.879629 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:57Z","lastTransitionTime":"2026-01-21T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.982705 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.982734 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.982742 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.982756 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:57 crc kubenswrapper[4765]: I0121 13:02:57.982768 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:57Z","lastTransitionTime":"2026-01-21T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.034948 4765 generic.go:334] "Generic (PLEG): container finished" podID="22f3d99e-f58c-4caa-be45-b879c6b614d3" containerID="49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2" exitCode=0 Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.035052 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" event={"ID":"22f3d99e-f58c-4caa-be45-b879c6b614d3","Type":"ContainerDied","Data":"49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2"} Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.041691 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerStarted","Data":"244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8"} Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.066813 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:58Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.083834 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:58Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.086840 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.086897 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.086907 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.086926 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.086939 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:58Z","lastTransitionTime":"2026-01-21T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.106053 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:58Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.127109 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:58Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.142633 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:58Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.158369 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:58Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.180798 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:58Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.202042 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:58Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.204977 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.205037 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.205051 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.205071 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.205083 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:58Z","lastTransitionTime":"2026-01-21T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.219421 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:58Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.238790 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:58Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.252292 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:58Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.264449 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:58Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.279154 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:58Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.296163 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:58Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:58 crc 
kubenswrapper[4765]: I0121 13:02:58.308486 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.308536 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.308548 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.308570 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.308583 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:58Z","lastTransitionTime":"2026-01-21T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.411355 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.411404 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.411418 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.411438 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.411451 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:58Z","lastTransitionTime":"2026-01-21T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.513659 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.513692 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.513705 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.513722 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.513735 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:58Z","lastTransitionTime":"2026-01-21T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.613384 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.613424 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:02:58 crc kubenswrapper[4765]: E0121 13:02:58.613541 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:02:58 crc kubenswrapper[4765]: E0121 13:02:58.613650 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.615615 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.615657 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.615668 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.615683 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.615693 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:58Z","lastTransitionTime":"2026-01-21T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.666535 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:36:43.048067965 +0000 UTC Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.717856 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.717897 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.717915 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.717944 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.717956 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:58Z","lastTransitionTime":"2026-01-21T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.820891 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.820928 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.820938 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.820953 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.820961 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:58Z","lastTransitionTime":"2026-01-21T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.923902 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.923962 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.923974 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.923998 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:58 crc kubenswrapper[4765]: I0121 13:02:58.924014 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:58Z","lastTransitionTime":"2026-01-21T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.027047 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.027106 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.027120 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.027139 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.027160 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:59Z","lastTransitionTime":"2026-01-21T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.059674 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" event={"ID":"22f3d99e-f58c-4caa-be45-b879c6b614d3","Type":"ContainerStarted","Data":"12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b"} Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.069921 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerStarted","Data":"c4c3f9c53b570ac1dfbaf209d29a9045a4d6a091151761364091faf478a6a490"} Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.070848 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.070915 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.070941 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.081948 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.094642 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.110103 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.178436 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.181405 4765 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.181438 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.181447 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.181462 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.181472 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:59Z","lastTransitionTime":"2026-01-21T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.181764 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.182565 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.193631 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"
quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.207919 4765 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.275634 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.283363 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.283526 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.283582 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.283695 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.283772 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:59Z","lastTransitionTime":"2026-01-21T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.288376 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.299359 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.311559 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.325224 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.341166 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.357385 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.375664 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.385563 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.385596 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.385605 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.385624 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.385633 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:59Z","lastTransitionTime":"2026-01-21T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.388828 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.400683 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.414991 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.428808 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.446147 4765 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c92802303
2e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.460861 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.484219 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3f9c53b570ac1dfbaf209d29a9045a4d6a091
151761364091faf478a6a490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.488261 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.488387 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.488467 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.488576 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.488673 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:59Z","lastTransitionTime":"2026-01-21T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.501939 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.522430 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.534256 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.547305 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.559865 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.572955 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.585691 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.591349 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.591479 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.591589 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.591718 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.591926 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:59Z","lastTransitionTime":"2026-01-21T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.613633 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:02:59 crc kubenswrapper[4765]: E0121 13:02:59.613878 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.633270 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.647051 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.655773 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.666337 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.667269 4765 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 05:33:53.542824417 +0000 UTC Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.679420 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.692270 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.696531 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.696594 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.696609 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.696624 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.696944 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:59Z","lastTransitionTime":"2026-01-21T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.709236 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:
02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.730856 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3f9c53b570ac1dfbaf209d29a9045a4d6a091151761364091faf478a6a490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.750370 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.766187 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.777228 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.793271 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.799628 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.799674 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.799715 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.799733 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.799745 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:59Z","lastTransitionTime":"2026-01-21T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.811615 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.828784 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:02:59Z is after 2025-08-24T17:21:41Z" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.902345 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.902380 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.902391 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.902404 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:02:59 crc kubenswrapper[4765]: I0121 13:02:59.902413 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:02:59Z","lastTransitionTime":"2026-01-21T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.005630 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.005709 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.005727 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.005762 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.005777 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:00Z","lastTransitionTime":"2026-01-21T13:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.108712 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.109254 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.109274 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.109292 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.109301 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:00Z","lastTransitionTime":"2026-01-21T13:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.213320 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.213364 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.213388 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.213419 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.213439 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:00Z","lastTransitionTime":"2026-01-21T13:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.316266 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.316294 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.316303 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.316317 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.316328 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:00Z","lastTransitionTime":"2026-01-21T13:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.419785 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.419815 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.419827 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.420084 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.420109 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:00Z","lastTransitionTime":"2026-01-21T13:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.522793 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.522841 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.522853 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.522871 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.522882 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:00Z","lastTransitionTime":"2026-01-21T13:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.612927 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:00 crc kubenswrapper[4765]: E0121 13:03:00.613095 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.614093 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:00 crc kubenswrapper[4765]: E0121 13:03:00.614168 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.625595 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.625633 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.625643 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.625660 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.625670 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:00Z","lastTransitionTime":"2026-01-21T13:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.668054 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 20:48:15.448420738 +0000 UTC Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.728302 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.728368 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.728393 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.728747 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.728962 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:00Z","lastTransitionTime":"2026-01-21T13:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.831721 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.831771 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.831788 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.831805 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.831816 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:00Z","lastTransitionTime":"2026-01-21T13:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.934272 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.934325 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.934338 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.934354 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:00 crc kubenswrapper[4765]: I0121 13:03:00.934366 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:00Z","lastTransitionTime":"2026-01-21T13:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.037272 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.037324 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.037337 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.037357 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.037368 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:01Z","lastTransitionTime":"2026-01-21T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.182394 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.182451 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.182461 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.182477 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.182487 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:01Z","lastTransitionTime":"2026-01-21T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.190073 4765 generic.go:334] "Generic (PLEG): container finished" podID="22f3d99e-f58c-4caa-be45-b879c6b614d3" containerID="12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b" exitCode=0 Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.190103 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" event={"ID":"22f3d99e-f58c-4caa-be45-b879c6b614d3","Type":"ContainerDied","Data":"12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b"} Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.206713 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.229720 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.243747 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.257481 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.269716 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.282398 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.284768 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.284800 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.284810 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.284826 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.284838 4765 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:01Z","lastTransitionTime":"2026-01-21T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.296227 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.312291 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.330633 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.388179 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.388257 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.388271 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.388287 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.388298 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:01Z","lastTransitionTime":"2026-01-21T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.438819 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3f9c53b570ac1dfbaf209d29a9045a4d6a091151761364091faf478a6a490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.452485 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.465279 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.479483 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.490467 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.490507 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.490519 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.490537 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.490552 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:01Z","lastTransitionTime":"2026-01-21T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.507464 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xx
kp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.570251 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9"] Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.570936 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.573154 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.573359 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.587667 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controlle
r-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.592618 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.592663 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.592680 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.592697 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.592707 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:01Z","lastTransitionTime":"2026-01-21T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.603133 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.612013 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd4a2a03-192d-4335-b808-aa313f573870-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pvtm9\" (UID: \"cd4a2a03-192d-4335-b808-aa313f573870\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.612104 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd4a2a03-192d-4335-b808-aa313f573870-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pvtm9\" (UID: \"cd4a2a03-192d-4335-b808-aa313f573870\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.612142 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcr5w\" (UniqueName: 
\"kubernetes.io/projected/cd4a2a03-192d-4335-b808-aa313f573870-kube-api-access-qcr5w\") pod \"ovnkube-control-plane-749d76644c-pvtm9\" (UID: \"cd4a2a03-192d-4335-b808-aa313f573870\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.612177 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd4a2a03-192d-4335-b808-aa313f573870-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pvtm9\" (UID: \"cd4a2a03-192d-4335-b808-aa313f573870\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.612951 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:01 crc kubenswrapper[4765]: E0121 13:03:01.613101 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.617122 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\
\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.630044 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.646158 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.658080 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.668795 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 13:03:50.671449088 +0000 UTC Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.673253 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.688806 4765 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c92802303
2e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.696735 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.696797 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.696809 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.696829 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.696842 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:01Z","lastTransitionTime":"2026-01-21T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.706555 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.712687 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd4a2a03-192d-4335-b808-aa313f573870-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pvtm9\" (UID: \"cd4a2a03-192d-4335-b808-aa313f573870\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.712732 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd4a2a03-192d-4335-b808-aa313f573870-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pvtm9\" (UID: \"cd4a2a03-192d-4335-b808-aa313f573870\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.712752 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcr5w\" (UniqueName: \"kubernetes.io/projected/cd4a2a03-192d-4335-b808-aa313f573870-kube-api-access-qcr5w\") pod \"ovnkube-control-plane-749d76644c-pvtm9\" (UID: \"cd4a2a03-192d-4335-b808-aa313f573870\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.712774 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd4a2a03-192d-4335-b808-aa313f573870-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pvtm9\" (UID: \"cd4a2a03-192d-4335-b808-aa313f573870\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.713428 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd4a2a03-192d-4335-b808-aa313f573870-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pvtm9\" (UID: \"cd4a2a03-192d-4335-b808-aa313f573870\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.713913 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd4a2a03-192d-4335-b808-aa313f573870-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pvtm9\" (UID: 
\"cd4a2a03-192d-4335-b808-aa313f573870\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.721986 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd4a2a03-192d-4335-b808-aa313f573870-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pvtm9\" (UID: \"cd4a2a03-192d-4335-b808-aa313f573870\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.723687 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.737413 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.742301 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcr5w\" (UniqueName: \"kubernetes.io/projected/cd4a2a03-192d-4335-b808-aa313f573870-kube-api-access-qcr5w\") pod \"ovnkube-control-plane-749d76644c-pvtm9\" (UID: \"cd4a2a03-192d-4335-b808-aa313f573870\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.754426 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.765058 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.781972 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.799925 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.800000 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.800015 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.800038 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.800051 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:01Z","lastTransitionTime":"2026-01-21T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.803110 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3f9c53b570ac1dfbaf209d29a9045a4d6a091151761364091faf478a6a490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.867071 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.867137 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.867151 
4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.867175 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.867188 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:01Z","lastTransitionTime":"2026-01-21T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:01 crc kubenswrapper[4765]: E0121 13:03:01.880845 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.884831 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.884931 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.884943 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.884983 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.884994 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:01Z","lastTransitionTime":"2026-01-21T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.897003 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" Jan 21 13:03:01 crc kubenswrapper[4765]: E0121 13:03:01.901771 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.907455 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.907510 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.907522 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.907541 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.907555 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:01Z","lastTransitionTime":"2026-01-21T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:01 crc kubenswrapper[4765]: W0121 13:03:01.918626 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd4a2a03_192d_4335_b808_aa313f573870.slice/crio-9b22e2c1c9a76e19e7861145319ffe359a3eb4af984b9b34ec880fc5c3cdd6a4 WatchSource:0}: Error finding container 9b22e2c1c9a76e19e7861145319ffe359a3eb4af984b9b34ec880fc5c3cdd6a4: Status 404 returned error can't find the container with id 9b22e2c1c9a76e19e7861145319ffe359a3eb4af984b9b34ec880fc5c3cdd6a4 Jan 21 13:03:01 crc kubenswrapper[4765]: E0121 13:03:01.925003 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.928785 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.928846 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.928863 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.928883 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.928895 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:01Z","lastTransitionTime":"2026-01-21T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:01 crc kubenswrapper[4765]: E0121 13:03:01.942662 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:01Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.946860 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.946903 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
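
Note the shape of the failure above: each retry re-sends the entire status patch, conditions and full images list included, so one expired webhook certificate turns every heartbeat into the same multi-kilobyte rejected JSON. The kubelet only gives up after a small fixed retry budget and then emits the "exceeds retry count" line that follows below. A compressed sketch of that loop; the constant mirrors the upstream kubelet's nodeStatusUpdateRetry, everything else is illustrative rather than kubelet code:

    package main

    import (
        "errors"
        "fmt"
    )

    // Small fixed budget for node status updates; upstream the kubelet
    // uses nodeStatusUpdateRetry = 5.
    const nodeStatusUpdateRetry = 5

    func updateNodeStatus(patch func() error) error {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            err := patch()
            if err == nil {
                return nil
            }
            // Matches the E-level kubelet_node_status.go:585 lines above.
            fmt.Println("Error updating node status, will retry:", err)
        }
        // Matches the final kubelet_node_status.go:572 line below.
        return errors.New("update node status exceeds retry count")
    }

    func main() {
        // Simulate this log's failure mode: every attempt dies at the
        // admission webhook before the patch is applied.
        webhookErr := errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
        fmt.Println(updateNodeStatus(func() error { return webhookErr }))
    }
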
event="NodeHasNoDiskPressure" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.946921 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.946942 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.946960 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:01Z","lastTransitionTime":"2026-01-21T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:01 crc kubenswrapper[4765]: E0121 13:03:01.962198 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
Jan 21 13:03:01 crc kubenswrapper[4765]: E0121 13:03:01.962411 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
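
Every attempt died at the same place: the node-identity admission webhook on 127.0.0.1:9743 presents a certificate that expired on 2025-08-24, while the node's clock reads 2026-01-21, so the TLS handshake fails before any patch is evaluated. When confirming this on the node, it can help to read the validity window off whatever certificate the endpoint is actually serving. The sketch below is an illustration for that check, not cluster tooling, and it assumes it runs on the node itself, since the webhook listens on loopback:

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Address taken from the Post URL in the log lines above.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
            // Deliberately skip verification: an expired certificate
            // would otherwise abort the handshake before we can read it.
            InsecureSkipVerify: true,
        })
        if err != nil {
            log.Fatalf("dial webhook endpoint: %v", err)
        }
        defer conn.Close()

        now := time.Now().UTC()
        for _, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%v\n",
                cert.Subject.String(),
                cert.NotBefore.UTC().Format(time.RFC3339),
                cert.NotAfter.UTC().Format(time.RFC3339),
                now.After(cert.NotAfter))
        }
    }

Skipping chain verification is deliberate here: the point is to read the dates off the presented certificate, which ordinary verification would reject before the handshake completes.
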
event="NodeHasSufficientMemory" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.964325 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.964336 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.964354 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:01 crc kubenswrapper[4765]: I0121 13:03:01.964371 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:01Z","lastTransitionTime":"2026-01-21T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.090836 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.090882 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.090892 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.090908 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.090918 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:02Z","lastTransitionTime":"2026-01-21T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.194360 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.194422 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.194432 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.194448 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.194458 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:02Z","lastTransitionTime":"2026-01-21T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.200516 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" event={"ID":"22f3d99e-f58c-4caa-be45-b879c6b614d3","Type":"ContainerStarted","Data":"0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5"} Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.201395 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" event={"ID":"cd4a2a03-192d-4335-b808-aa313f573870","Type":"ContainerStarted","Data":"9b22e2c1c9a76e19e7861145319ffe359a3eb4af984b9b34ec880fc5c3cdd6a4"} Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.298447 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.298515 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.298532 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.298558 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.298575 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:02Z","lastTransitionTime":"2026-01-21T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.401487 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.401530 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.401541 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.401557 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.401570 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:02Z","lastTransitionTime":"2026-01-21T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.504864 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.504924 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.505063 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.505135 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.505150 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:02Z","lastTransitionTime":"2026-01-21T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.608585 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.608650 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.608669 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.608697 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.608721 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:02Z","lastTransitionTime":"2026-01-21T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.613189 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:02 crc kubenswrapper[4765]: E0121 13:03:02.613357 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.613845 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:02 crc kubenswrapper[4765]: E0121 13:03:02.613913 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.669995 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 20:09:58.529926963 +0000 UTC Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.711961 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.712026 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.712041 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.712064 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.712085 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:02Z","lastTransitionTime":"2026-01-21T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.815491 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.815541 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.815553 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.815570 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.815581 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:02Z","lastTransitionTime":"2026-01-21T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.918507 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.918541 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.918548 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.918565 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:02 crc kubenswrapper[4765]: I0121 13:03:02.918575 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:02Z","lastTransitionTime":"2026-01-21T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.020931 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.020958 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.020967 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.020983 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.020993 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:03Z","lastTransitionTime":"2026-01-21T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.124184 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.124238 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.124247 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.124263 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.124273 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:03Z","lastTransitionTime":"2026-01-21T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.227113 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.227154 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.227170 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.227195 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.227230 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:03Z","lastTransitionTime":"2026-01-21T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.330650 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.330738 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.330758 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.330793 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.330813 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:03Z","lastTransitionTime":"2026-01-21T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.440260 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.440292 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.440305 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.440326 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.440337 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:03Z","lastTransitionTime":"2026-01-21T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.516097 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" event={"ID":"cd4a2a03-192d-4335-b808-aa313f573870","Type":"ContainerStarted","Data":"2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9"} Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.516164 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" event={"ID":"cd4a2a03-192d-4335-b808-aa313f573870","Type":"ContainerStarted","Data":"a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891"} Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.518850 4765 generic.go:334] "Generic (PLEG): container finished" podID="22f3d99e-f58c-4caa-be45-b879c6b614d3" containerID="0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5" exitCode=0 Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.518897 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" event={"ID":"22f3d99e-f58c-4caa-be45-b879c6b614d3","Type":"ContainerDied","Data":"0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5"} Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.546937 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.546988 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.547000 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.547022 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.547034 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:03Z","lastTransitionTime":"2026-01-21T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.570481 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:03Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.588693 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:03Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.600624 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:03Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.613793 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:03 crc kubenswrapper[4765]: E0121 13:03:03.613959 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.627010 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:03Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.670616 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 14:30:51.302363127 +0000 UTC Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.680783 4765 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3f9c53b570ac1dfbaf209d29a9045a4d6a091151761364091faf478a6a490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:03Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.740195 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.740292 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.740314 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:03 crc 
kubenswrapper[4765]: I0121 13:03:03.740341 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.740363 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:03Z","lastTransitionTime":"2026-01-21T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.748515 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:03Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.768352 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:03Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.784576 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:03Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.798297 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:03Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.810941 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:03Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.824691 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:03Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.843489 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.843540 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.843553 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.843579 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.843593 4765 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:03Z","lastTransitionTime":"2026-01-21T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.848137 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:03Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.871427 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:03Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.890413 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:03Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.907882 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:03Z is after 2025-08-24T17:21:41Z" Jan 21 
13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.947653 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.947706 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.947717 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.947735 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.947750 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:03Z","lastTransitionTime":"2026-01-21T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:03 crc kubenswrapper[4765]: I0121 13:03:03.985483 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:03Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.004024 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.027504 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.041930 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.050796 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.051045 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.051202 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.051371 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.051486 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:04Z","lastTransitionTime":"2026-01-21T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.057463 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.071770 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.088960 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.109929 4765 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c92802303
2e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.125263 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.141378 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 
13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.156802 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.157109 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.157184 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.157262 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.157323 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:04Z","lastTransitionTime":"2026-01-21T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.259994 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.260440 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.263486 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.263598 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.263622 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:04Z","lastTransitionTime":"2026-01-21T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.290289 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.366175 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.366223 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.366234 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.366250 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.366263 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:04Z","lastTransitionTime":"2026-01-21T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.376289 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.378617 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-4t7jw"] Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 
13:03:04.379078 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:04 crc kubenswrapper[4765]: E0121 13:03:04.379135 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.395581 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.422156 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.439142 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs\") pod \"network-metrics-daemon-4t7jw\" (UID: \"d8dea79f-de5c-4034-9742-c322b723a59c\") " 
pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.439583 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzchv\" (UniqueName: \"kubernetes.io/projected/d8dea79f-de5c-4034-9742-c322b723a59c-kube-api-access-rzchv\") pod \"network-metrics-daemon-4t7jw\" (UID: \"d8dea79f-de5c-4034-9742-c322b723a59c\") " pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.446787 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3f9c53b570ac1dfbaf209d29a9045a4d6a091151761364091faf478a6a490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn
kube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.466664 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.468997 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.469025 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.469035 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.469052 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.469063 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:04Z","lastTransitionTime":"2026-01-21T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.576147 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.577766 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzchv\" (UniqueName: \"kubernetes.io/projected/d8dea79f-de5c-4034-9742-c322b723a59c-kube-api-access-rzchv\") pod \"network-metrics-daemon-4t7jw\" (UID: \"d8dea79f-de5c-4034-9742-c322b723a59c\") " pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.577816 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs\") pod \"network-metrics-daemon-4t7jw\" (UID: \"d8dea79f-de5c-4034-9742-c322b723a59c\") " pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:04 crc kubenswrapper[4765]: E0121 13:03:04.577944 4765 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:03:04 crc kubenswrapper[4765]: E0121 13:03:04.578000 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs podName:d8dea79f-de5c-4034-9742-c322b723a59c nodeName:}" failed. No retries permitted until 2026-01-21 13:03:05.077981639 +0000 UTC m=+46.095707461 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs") pod "network-metrics-daemon-4t7jw" (UID: "d8dea79f-de5c-4034-9742-c322b723a59c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.579067 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.579104 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.579115 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.579133 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.579145 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:04Z","lastTransitionTime":"2026-01-21T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.594808 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q
cr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.602095 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzchv\" (UniqueName: \"kubernetes.io/projected/d8dea79f-de5c-4034-9742-c322b723a59c-kube-api-access-rzchv\") pod \"network-metrics-daemon-4t7jw\" (UID: \"d8dea79f-de5c-4034-9742-c322b723a59c\") " pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.613350 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.613446 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:04 crc kubenswrapper[4765]: E0121 13:03:04.613483 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:04 crc kubenswrapper[4765]: E0121 13:03:04.613577 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
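Interleaved with the webhook failures is the second recurring condition: the node is held NotReady because /etc/kubernetes/cni/net.d/ contains no CNI configuration yet. On an OVN-Kubernetes node that file is normally written once ovnkube-controller comes up, and the ovnkube-node-x677d status above still shows that container unready. A small sketch of the check the kubelet message implies, assuming only that the directory path quoted in the message is correct:

    package main

    import (
        "fmt"
        "os"
    )

    // Path copied from the kubelet NetworkPluginNotReady message.
    const cniConfDir = "/etc/kubernetes/cni/net.d"

    func main() {
        entries, err := os.ReadDir(cniConfDir)
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        if len(entries) == 0 {
            fmt.Println("no CNI configuration files: network plugin not ready")
            return
        }
        for _, e := range entries {
            fmt.Println("found CNI config:", e.Name())
        }
    }

Once a configuration file appears in that directory, the NetworkReady condition can flip to true and sandbox creation for pods such as network-metrics-daemon-4t7jw can proceed.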
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.615458 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.630455 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.644065 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.659549 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.671459 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 17:57:23.524247509 +0000 UTC Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.681609 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.681637 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.681646 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.681664 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.681674 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:04Z","lastTransitionTime":"2026-01-21T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.689374 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3f9c53b570ac1dfbaf209d29a9045a4d6a091151761364091faf478a6a490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.705576 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.727240 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.740364 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.756718 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.770331 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.782259 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.783926 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.783974 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.783987 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.784018 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.784031 4765 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:04Z","lastTransitionTime":"2026-01-21T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.798089 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.809880 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dea79f-de5c-4034-9742-c322b723a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t7jw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:04Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.887813 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.887855 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.887864 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.887889 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.887910 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:04Z","lastTransitionTime":"2026-01-21T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.990617 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.990651 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.990660 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.990676 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:04 crc kubenswrapper[4765]: I0121 13:03:04.990691 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:04Z","lastTransitionTime":"2026-01-21T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.082297 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs\") pod \"network-metrics-daemon-4t7jw\" (UID: \"d8dea79f-de5c-4034-9742-c322b723a59c\") " pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:05 crc kubenswrapper[4765]: E0121 13:03:05.082474 4765 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:03:05 crc kubenswrapper[4765]: E0121 13:03:05.082529 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs podName:d8dea79f-de5c-4034-9742-c322b723a59c nodeName:}" failed. 
No retries permitted until 2026-01-21 13:03:06.082514503 +0000 UTC m=+47.100240325 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs") pod "network-metrics-daemon-4t7jw" (UID: "d8dea79f-de5c-4034-9742-c322b723a59c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.093602 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.093653 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.093665 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.093685 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.093698 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:05Z","lastTransitionTime":"2026-01-21T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.196515 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.197089 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.197101 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.197119 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.197132 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:05Z","lastTransitionTime":"2026-01-21T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.376997 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.377034 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.377045 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.377064 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.377075 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:05Z","lastTransitionTime":"2026-01-21T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.487496 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.487821 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.487960 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.488051 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.488128 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:05Z","lastTransitionTime":"2026-01-21T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.587927 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" event={"ID":"22f3d99e-f58c-4caa-be45-b879c6b614d3","Type":"ContainerStarted","Data":"1e7ea5517a7397813e068f0d9bdac688d17d12cfe232c604d4b47ec4e044ab9e"} Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.591130 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.591182 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.591196 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.591230 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.591240 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:05Z","lastTransitionTime":"2026-01-21T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.608913 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe671
47707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.613845 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:05 crc kubenswrapper[4765]: E0121 13:03:05.614311 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.625619 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.640648 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.654787 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.669155 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.672039 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:55:53.16865695 +0000 UTC Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.682004 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.693854 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.693894 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.693904 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 
13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.693921 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.693934 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:05Z","lastTransitionTime":"2026-01-21T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.698171 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.709928 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dea79f-de5c-4034-9742-c322b723a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t7jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.724321 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.741533 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.755088 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:05Z is after 2025-08-24T17:21:41Z" Jan 21 
13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.770654 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.785268 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.796313 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.796367 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.796378 4765 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.796393 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.796403 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:05Z","lastTransitionTime":"2026-01-21T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.804532 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.820472 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7ea5517a7397813e068f0d9bdac688d17d12cfe232c604d4b47ec4e044ab9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.846541 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3f9c53b570ac1dfbaf209d29a9045a4d6a091151761364091faf478a6a490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:05Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.899854 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.899919 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.899932 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.899953 4765 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 21 13:03:05 crc kubenswrapper[4765]: I0121 13:03:05.899965 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:05Z","lastTransitionTime":"2026-01-21T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.002069 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.002432 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.002520 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.002605 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.002673 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:06Z","lastTransitionTime":"2026-01-21T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.094777 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs\") pod \"network-metrics-daemon-4t7jw\" (UID: \"d8dea79f-de5c-4034-9742-c322b723a59c\") " pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:06 crc kubenswrapper[4765]: E0121 13:03:06.094965 4765 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:03:06 crc kubenswrapper[4765]: E0121 13:03:06.095031 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs podName:d8dea79f-de5c-4034-9742-c322b723a59c nodeName:}" failed. No retries permitted until 2026-01-21 13:03:08.095010542 +0000 UTC m=+49.112736364 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs") pod "network-metrics-daemon-4t7jw" (UID: "d8dea79f-de5c-4034-9742-c322b723a59c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.104905 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.104949 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.104960 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.104975 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.104985 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:06Z","lastTransitionTime":"2026-01-21T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.208015 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.208093 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.208450 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.208473 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.208504 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:06Z","lastTransitionTime":"2026-01-21T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.313536 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.313568 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.313578 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.313592 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.313601 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:06Z","lastTransitionTime":"2026-01-21T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.417321 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.417395 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.417457 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.417482 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.417496 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:06Z","lastTransitionTime":"2026-01-21T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.520554 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.520613 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.520630 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.520721 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.520737 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:06Z","lastTransitionTime":"2026-01-21T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.593696 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovnkube-controller/0.log" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.597704 4765 generic.go:334] "Generic (PLEG): container finished" podID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerID="c4c3f9c53b570ac1dfbaf209d29a9045a4d6a091151761364091faf478a6a490" exitCode=1 Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.597779 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerDied","Data":"c4c3f9c53b570ac1dfbaf209d29a9045a4d6a091151761364091faf478a6a490"} Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.599127 4765 scope.go:117] "RemoveContainer" containerID="c4c3f9c53b570ac1dfbaf209d29a9045a4d6a091151761364091faf478a6a490" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.612986 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.613038 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:06 crc kubenswrapper[4765]: E0121 13:03:06.613141 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:06 crc kubenswrapper[4765]: E0121 13:03:06.613343 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.613613 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:06 crc kubenswrapper[4765]: E0121 13:03:06.613692 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.618293 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:06Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.623842 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.624061 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.624177 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.624317 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.624430 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:06Z","lastTransitionTime":"2026-01-21T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.646912 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:06Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.662794 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:06Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.672605 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 17:20:35.000219631 +0000 UTC Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.677648 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:06Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.693635 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dea79f-de5c-4034-9742-c322b723a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:04Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-4t7jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:06Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.710660 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:06Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.728116 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.728158 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.728170 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.728186 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.728197 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:06Z","lastTransitionTime":"2026-01-21T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.728126 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:06Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.741508 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:06Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.756854 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:06Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.786167 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:06Z is after 2025-08-24T17:21:41Z" Jan 21 
13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.803855 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:06Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.817928 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:06Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.830504 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.830538 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.830547 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.830564 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.830576 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:06Z","lastTransitionTime":"2026-01-21T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.836970 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7ea5517a7397813e068f0d9bdac688d17d12cfe232c604d4b47ec4e044ab9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:06Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.857941 4765 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4c3f9c53b570ac1dfbaf209d29a9045a4d6a091151761364091faf478a6a490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4c3f9c53b570ac1dfbaf209d29a9045a4d6a091151761364091faf478a6a490\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"message\\\":\\\"er 3 for removal\\\\nI0121 13:03:05.768433 5888 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 13:03:05.767705 5888 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:05.768530 5888 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0121 13:03:05.768562 5888 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 13:03:05.767773 5888 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:05.767847 5888 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:05.769041 5888 factory.go:656] Stopping watch factory\\\\nI0121 13:03:05.768043 5888 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:05.768074 5888 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:05.768103 5888 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:05.768407 5888 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:06Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.878817 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:06Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.893955 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:06Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 
13:03:06.933179 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.933234 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.933245 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.933263 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:06 crc kubenswrapper[4765]: I0121 13:03:06.933276 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:06Z","lastTransitionTime":"2026-01-21T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.035498 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.035549 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.035559 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.035577 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.035588 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:07Z","lastTransitionTime":"2026-01-21T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.138090 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.138141 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.138156 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.138176 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.138193 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:07Z","lastTransitionTime":"2026-01-21T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.240724 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.240769 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.240781 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.240799 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.240809 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:07Z","lastTransitionTime":"2026-01-21T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.343741 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.343787 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.343802 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.343821 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.343835 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:07Z","lastTransitionTime":"2026-01-21T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.446727 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.446777 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.446831 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.446853 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.446864 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:07Z","lastTransitionTime":"2026-01-21T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.549983 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.550020 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.550032 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.550049 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.550061 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:07Z","lastTransitionTime":"2026-01-21T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.603701 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovnkube-controller/0.log" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.607425 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerStarted","Data":"e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1"} Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.608844 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.615652 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:07 crc kubenswrapper[4765]: E0121 13:03:07.615766 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.625984 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:07Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.641662 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:07Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.652907 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.652943 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.652954 4765 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.652971 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.652981 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:07Z","lastTransitionTime":"2026-01-21T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.657625 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:07Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.673807 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 09:47:20.102547778 +0000 UTC Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.675955 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7ea5517a7397813e068f0d9bdac688d17d12cfe232c604d4b47ec4e044ab9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:07Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.698332 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4c3f9c53b570ac1dfbaf209d29a9045a4d6a091151761364091faf478a6a490\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"message\\\":\\\"er 3 for removal\\\\nI0121 13:03:05.768433 5888 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 13:03:05.767705 5888 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:05.768530 5888 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0121 13:03:05.768562 5888 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 13:03:05.767773 5888 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:05.767847 5888 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:05.769041 5888 factory.go:656] Stopping watch factory\\\\nI0121 13:03:05.768043 5888 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:05.768074 5888 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:05.768103 5888 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:05.768407 5888 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:07Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.714839 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:07Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.733190 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:07Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.754278 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:07Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.755423 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.755475 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.755489 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.755509 4765 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.755520 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:07Z","lastTransitionTime":"2026-01-21T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.766359 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:07Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.778929 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:07Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.792257 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:07Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.803477 4765 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dea79f-de5c-4034-9742-c322b723a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t7jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:07Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.821189 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:07Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.841629 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:07Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.858578 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.858703 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.858720 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.858740 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.858752 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:07Z","lastTransitionTime":"2026-01-21T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.858852 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:07Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.871679 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:07Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.961386 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.961437 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.961450 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.961476 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:07 crc kubenswrapper[4765]: I0121 13:03:07.961491 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:07Z","lastTransitionTime":"2026-01-21T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.063790 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.063850 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.063862 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.063882 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.063896 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:08Z","lastTransitionTime":"2026-01-21T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:08 crc kubenswrapper[4765]: E0121 13:03:08.117254 4765 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.117262 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs\") pod \"network-metrics-daemon-4t7jw\" (UID: \"d8dea79f-de5c-4034-9742-c322b723a59c\") " pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:08 crc kubenswrapper[4765]: E0121 13:03:08.117340 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs podName:d8dea79f-de5c-4034-9742-c322b723a59c nodeName:}" failed. No retries permitted until 2026-01-21 13:03:12.11732283 +0000 UTC m=+53.135048652 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs") pod "network-metrics-daemon-4t7jw" (UID: "d8dea79f-de5c-4034-9742-c322b723a59c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.166422 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.166464 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.166474 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.166529 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.166539 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:08Z","lastTransitionTime":"2026-01-21T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.269990 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.270049 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.270060 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.270079 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.270092 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:08Z","lastTransitionTime":"2026-01-21T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.372929 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.372979 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.372988 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.373003 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.373014 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:08Z","lastTransitionTime":"2026-01-21T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.475424 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.475481 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.475491 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.475509 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.475521 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:08Z","lastTransitionTime":"2026-01-21T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.578756 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.578800 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.578811 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.578829 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.578841 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:08Z","lastTransitionTime":"2026-01-21T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.612069 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovnkube-controller/1.log" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.612585 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.612659 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.612743 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:08 crc kubenswrapper[4765]: E0121 13:03:08.612733 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:08 crc kubenswrapper[4765]: E0121 13:03:08.612844 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.612865 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovnkube-controller/0.log" Jan 21 13:03:08 crc kubenswrapper[4765]: E0121 13:03:08.612985 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.615281 4765 generic.go:334] "Generic (PLEG): container finished" podID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerID="e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1" exitCode=1 Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.615322 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerDied","Data":"e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1"} Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.615404 4765 scope.go:117] "RemoveContainer" containerID="c4c3f9c53b570ac1dfbaf209d29a9045a4d6a091151761364091faf478a6a490" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.615884 4765 scope.go:117] "RemoveContainer" containerID="e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1" Jan 21 13:03:08 crc kubenswrapper[4765]: E0121 13:03:08.616073 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-x677d_openshift-ovn-kubernetes(cd80c14d-ebec-4d65-8116-149400d6f8be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.633525 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\
\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:08Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.647331 4765 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:08Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.664794 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:08Z is after 2025-08-24T17:21:41Z" Jan 21 
13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.673992 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 12:43:55.743624958 +0000 UTC Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.680926 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:08Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.682618 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.682746 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.682830 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.682910 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.683003 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:08Z","lastTransitionTime":"2026-01-21T13:03:08Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.698406 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:08Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.717167 
4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:08Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.737353 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7ea5517a7397813e068f0d9bdac688d17d12cfe232c604d4b47ec4e044ab9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:08Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.763084 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4c3f9c53b570ac1dfbaf209d29a9045a4d6a091151761364091faf478a6a490\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"message\\\":\\\"er 3 for removal\\\\nI0121 13:03:05.768433 5888 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 13:03:05.767705 5888 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:05.768530 5888 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0121 13:03:05.768562 5888 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 13:03:05.767773 5888 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:05.767847 5888 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:05.769041 5888 factory.go:656] Stopping watch factory\\\\nI0121 13:03:05.768043 5888 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:05.768074 5888 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:05.768103 5888 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:05.768407 5888 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:07Z\\\",\\\"message\\\":\\\"s/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.627832 6124 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 
13:03:07.628081 6124 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.628078 6124 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.627796 6124 factory.go:656] Stopping watch factory\\\\nI0121 13:03:07.628593 6124 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.628834 6124 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 13:03:07.629259 6124 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.629537 6124 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.629907 6124 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:08Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.779296 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:08Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.785841 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.786123 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.786186 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.786316 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.786396 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:08Z","lastTransitionTime":"2026-01-21T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.795407 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:08Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.815063 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:08Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.830385 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:08Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.848460 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:08Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.864237 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:08Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.880185 4765 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dea79f-de5c-4034-9742-c322b723a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t7jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:08Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.889940 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.889989 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 
13:03:08.890000 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.890020 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.890031 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:08Z","lastTransitionTime":"2026-01-21T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.896694 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:08Z is after 2025-08-24T17:21:41Z"
Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.992887 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.992942 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.992953 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.992978 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:08 crc kubenswrapper[4765]: I0121 13:03:08.992993 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:08Z","lastTransitionTime":"2026-01-21T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.096364 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.096428 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.096461 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.096483 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
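
The status-patch failures above all fail the same way: the API server cannot call the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 because its serving certificate expired on 2025-08-24, while the node clock reads 2026-01-21. A minimal sketch (Python, matching only the exact error format shown in these records) that pulls both timestamps out of such a line and reports the skew:

    import re
    from datetime import datetime, timezone

    # Error format copied verbatim from the kubenswrapper records above.
    PAT = re.compile(r"current time ([-0-9T:Z]+) is after ([-0-9T:Z]+)")

    def ts(s: str) -> datetime:
        # The timestamps in the error are RFC 3339 with a literal trailing Z.
        return datetime.strptime(s, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

    line = ("tls: failed to verify certificate: x509: certificate has expired "
            "or is not yet valid: current time 2026-01-21T13:03:08Z is after "
            "2025-08-24T17:21:41Z")
    m = PAT.search(line)
    if m:
        now, not_after = ts(m.group(1)), ts(m.group(2))
        print(f"webhook certificate expired {now - not_after} before this record")

Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.096495 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:09Z","lastTransitionTime":"2026-01-21T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.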
Has your network provider started?"}
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.199589 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.199654 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.199670 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.199689 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.199702 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:09Z","lastTransitionTime":"2026-01-21T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.302023 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.302068 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.302084 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.302103 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.302114 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:09Z","lastTransitionTime":"2026-01-21T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.405169 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.405230 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.405240 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.405262 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
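
The Ready=False condition keeps repeating for one reason: the container runtime reports NetworkReady=false until a CNI configuration appears in /etc/kubernetes/cni/net.d/ (on this cluster, OVN-Kubernetes writes it once ovnkube-controller comes up). A sketch of the presence check, illustrative only; this mirrors the condition in spirit, not the runtime's actual ocicni logic:

    import os

    # Directory named in the NetworkPluginNotReady message above.
    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"

    def network_ready(conf_dir: str = CNI_CONF_DIR) -> bool:
        # Ready once at least one CNI config file exists in the directory.
        try:
            names = os.listdir(conf_dir)
        except FileNotFoundError:
            return False
        return any(n.endswith((".conf", ".conflist", ".json")) for n in names)

    print("NetworkReady:", network_ready())

Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.405273 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:09Z","lastTransitionTime":"2026-01-21T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.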
Has your network provider started?"}
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.508475 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.508515 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.508527 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.508545 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.508555 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:09Z","lastTransitionTime":"2026-01-21T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.611728 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.611791 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.611807 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.611830 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.611845 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:09Z","lastTransitionTime":"2026-01-21T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.613442 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
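
In the records that follow, kubelet declines to restart ovnkube-controller ("back-off 10s restarting failed container") and reports CrashLoopBackOff. A sketch of the resulting delay sequence, assuming the stock kubelet policy of a 10s initial back-off that doubles per failed restart and caps at five minutes:

    # CrashLoopBackOff delays in seconds, assuming kubelet's defaults
    # (10s initial, doubling per failed restart, capped at 300s).
    def restart_backoffs(initial=10, factor=2, cap=300, restarts=8):
        delay = initial
        for _ in range(restarts):
            yield min(delay, cap)
            delay *= factor

    print(list(restart_backoffs()))  # [10, 20, 40, 80, 160, 300, 300, 300]

Jan 21 13:03:09 crc kubenswrapper[4765]: E0121 13:03:09.613615 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"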
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.619800 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovnkube-controller/1.log" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.624028 4765 scope.go:117] "RemoveContainer" containerID="e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1" Jan 21 13:03:09 crc kubenswrapper[4765]: E0121 13:03:09.624229 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-x677d_openshift-ovn-kubernetes(cd80c14d-ebec-4d65-8116-149400d6f8be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.630104 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.642905 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.655088 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.665962 4765 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dea79f-de5c-4034-9742-c322b723a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t7jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.675079 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 18:38:43.475715438 +0000 UTC Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.679640 4765 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.693998 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.706530 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.714278 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.714354 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.714374 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.714425 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.714444 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:09Z","lastTransitionTime":"2026-01-21T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.718257 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.730769 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.743613 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.754820 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.768349 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7ea5517a7397813e068f0d9bdac688d17d12cfe232c604d4b47ec4e044ab9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.791466 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4c3f9c53b570ac1dfbaf209d29a9045a4d6a091151761364091faf478a6a490\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"message\\\":\\\"er 3 for removal\\\\nI0121 13:03:05.768433 5888 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 13:03:05.767705 5888 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:05.768530 5888 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0121 13:03:05.768562 5888 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0121 13:03:05.767773 5888 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:05.767847 5888 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:05.769041 5888 factory.go:656] Stopping watch factory\\\\nI0121 13:03:05.768043 5888 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:05.768074 5888 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:05.768103 5888 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:05.768407 5888 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:07Z\\\",\\\"message\\\":\\\"s/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.627832 6124 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 
13:03:07.628081 6124 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.628078 6124 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.627796 6124 factory.go:656] Stopping watch factory\\\\nI0121 13:03:07.628593 6124 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.628834 6124 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 13:03:07.629259 6124 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.629537 6124 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.629907 6124 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.806032 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.817665 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.817699 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.817711 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.817726 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.817739 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:09Z","lastTransitionTime":"2026-01-21T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.822681 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.836583 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.846788 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.856765 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.867402 4765 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dea79f-de5c-4034-9742-c322b723a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t7jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.879718 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.889866 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.905962 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.917741 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.920369 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.920437 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.920450 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.920467 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.920477 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:09Z","lastTransitionTime":"2026-01-21T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.927928 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.944079 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 
2025-08-24T17:21:41Z" Jan 21 13:03:09 crc kubenswrapper[4765]: I0121 13:03:09.969187 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.000427 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7ea5517a7397813e068f0d9bdac688d17d12cfe232c604d4b47ec4e044ab9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:09Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.023050 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.023094 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:10 crc 
kubenswrapper[4765]: I0121 13:03:10.023104 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.023120 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.023131 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:10Z","lastTransitionTime":"2026-01-21T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.023128 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6dfd8de5f2fe457fc5e2f9274dc0badfe985119
3e00bc206ffc03d4add302b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:07Z\\\",\\\"message\\\":\\\"s/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.627832 6124 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 13:03:07.628081 6124 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.628078 6124 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.627796 6124 factory.go:656] Stopping watch factory\\\\nI0121 13:03:07.628593 6124 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.628834 6124 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 13:03:07.629259 6124 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.629537 6124 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.629907 6124 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x677d_openshift-ovn-kubernetes(cd80c14d-ebec-4d65-8116-149400d6f8be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:10Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.037167 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:10Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.049783 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:10Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.061634 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"
quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:10Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.077534 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:10Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.126172 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.126584 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.126725 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.126862 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.126995 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:10Z","lastTransitionTime":"2026-01-21T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.230199 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.230611 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.230681 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.230750 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.230824 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:10Z","lastTransitionTime":"2026-01-21T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.333655 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.333692 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.333703 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.333721 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.333735 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:10Z","lastTransitionTime":"2026-01-21T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.435966 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.436030 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.436041 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.436057 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.436070 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:10Z","lastTransitionTime":"2026-01-21T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.538631 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.538671 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.538680 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.538696 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.538707 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:10Z","lastTransitionTime":"2026-01-21T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.613412 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.613500 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.613411 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:10 crc kubenswrapper[4765]: E0121 13:03:10.613599 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:10 crc kubenswrapper[4765]: E0121 13:03:10.613676 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:10 crc kubenswrapper[4765]: E0121 13:03:10.613807 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.641630 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.641671 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.641682 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.641700 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.641711 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:10Z","lastTransitionTime":"2026-01-21T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.675695 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 13:34:35.937036742 +0000 UTC Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.744990 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.745046 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.745059 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.745076 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.745093 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:10Z","lastTransitionTime":"2026-01-21T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.847927 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.847993 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.848007 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.848028 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.848042 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:10Z","lastTransitionTime":"2026-01-21T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.950573 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.950608 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.950618 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.950632 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:10 crc kubenswrapper[4765]: I0121 13:03:10.950646 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:10Z","lastTransitionTime":"2026-01-21T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.052452 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.052499 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.052510 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.052526 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.052537 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:11Z","lastTransitionTime":"2026-01-21T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.155735 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.155777 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.155786 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.155803 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.155815 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:11Z","lastTransitionTime":"2026-01-21T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.258506 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.258573 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.258586 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.258602 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.258614 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:11Z","lastTransitionTime":"2026-01-21T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.361388 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.361834 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.361948 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.362071 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.362187 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:11Z","lastTransitionTime":"2026-01-21T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.465647 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.465718 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.465731 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.465749 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.466102 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:11Z","lastTransitionTime":"2026-01-21T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.568641 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.568691 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.568702 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.568719 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.568731 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:11Z","lastTransitionTime":"2026-01-21T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.613545 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:11 crc kubenswrapper[4765]: E0121 13:03:11.613829 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.671669 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.671778 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.671806 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.671872 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.671909 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:11Z","lastTransitionTime":"2026-01-21T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.676692 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:23:38.506190024 +0000 UTC Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.774940 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.775418 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.775585 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.775743 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.775891 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:11Z","lastTransitionTime":"2026-01-21T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.879948 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.880003 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.880019 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.880042 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.880060 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:11Z","lastTransitionTime":"2026-01-21T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.982770 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.982806 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.982818 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.982836 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:11 crc kubenswrapper[4765]: I0121 13:03:11.982851 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:11Z","lastTransitionTime":"2026-01-21T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.067573 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.067637 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.067656 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.067719 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.067738 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:12Z","lastTransitionTime":"2026-01-21T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:12 crc kubenswrapper[4765]: E0121 13:03:12.086567 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:12Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.092757 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.092999 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.093022 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.093050 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.093073 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:12Z","lastTransitionTime":"2026-01-21T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:12 crc kubenswrapper[4765]: E0121 13:03:12.106140 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:12Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.109726 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.109757 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.109766 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.109782 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.109799 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:12Z","lastTransitionTime":"2026-01-21T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:12 crc kubenswrapper[4765]: E0121 13:03:12.122381 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:12Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.125558 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.125590 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.125602 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.125619 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.125631 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:12Z","lastTransitionTime":"2026-01-21T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:12 crc kubenswrapper[4765]: E0121 13:03:12.136818 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:12Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.140261 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.140296 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.140308 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.140326 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.140338 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:12Z","lastTransitionTime":"2026-01-21T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:12 crc kubenswrapper[4765]: E0121 13:03:12.151898 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:12Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:12 crc kubenswrapper[4765]: E0121 13:03:12.152086 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.153702 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.153749 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.153762 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.153778 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.153789 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:12Z","lastTransitionTime":"2026-01-21T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.164401 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs\") pod \"network-metrics-daemon-4t7jw\" (UID: \"d8dea79f-de5c-4034-9742-c322b723a59c\") " pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:12 crc kubenswrapper[4765]: E0121 13:03:12.164566 4765 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:03:12 crc kubenswrapper[4765]: E0121 13:03:12.164639 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs podName:d8dea79f-de5c-4034-9742-c322b723a59c nodeName:}" failed. No retries permitted until 2026-01-21 13:03:20.164618318 +0000 UTC m=+61.182344220 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs") pod "network-metrics-daemon-4t7jw" (UID: "d8dea79f-de5c-4034-9742-c322b723a59c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.257975 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.258052 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.258075 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.258106 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.258129 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:12Z","lastTransitionTime":"2026-01-21T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.362283 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.362350 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.362373 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.362406 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.362430 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:12Z","lastTransitionTime":"2026-01-21T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.465284 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.465332 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.465345 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.465364 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.465377 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:12Z","lastTransitionTime":"2026-01-21T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.568665 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.568717 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.568729 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.568747 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.568760 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:12Z","lastTransitionTime":"2026-01-21T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.613323 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.613390 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.613361 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:12 crc kubenswrapper[4765]: E0121 13:03:12.613508 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:12 crc kubenswrapper[4765]: E0121 13:03:12.613594 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:12 crc kubenswrapper[4765]: E0121 13:03:12.613657 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.671207 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.671255 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.671273 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.671294 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.671306 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:12Z","lastTransitionTime":"2026-01-21T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.677324 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 17:29:55.417773686 +0000 UTC Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.773536 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.773586 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.773599 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.773620 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.773631 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:12Z","lastTransitionTime":"2026-01-21T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.876702 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.876743 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.876757 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.876793 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.876809 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:12Z","lastTransitionTime":"2026-01-21T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.978976 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.979022 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.979036 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.979053 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:12 crc kubenswrapper[4765]: I0121 13:03:12.979065 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:12Z","lastTransitionTime":"2026-01-21T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.081718 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.081754 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.081770 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.081786 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.081796 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:13Z","lastTransitionTime":"2026-01-21T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.184774 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.184863 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.184880 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.184938 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.184954 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:13Z","lastTransitionTime":"2026-01-21T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.287625 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.287676 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.287688 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.287707 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.287720 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:13Z","lastTransitionTime":"2026-01-21T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.389915 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.389956 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.389970 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.389986 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.389997 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:13Z","lastTransitionTime":"2026-01-21T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.478335 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.478505 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.478545 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.478579 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.478609 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:13 crc kubenswrapper[4765]: E0121 13:03:13.478759 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:03:45.478706217 +0000 UTC m=+86.496432049 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:03:13 crc kubenswrapper[4765]: E0121 13:03:13.478776 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:03:13 crc kubenswrapper[4765]: E0121 13:03:13.478818 4765 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:03:13 crc kubenswrapper[4765]: E0121 13:03:13.478848 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:03:13 crc kubenswrapper[4765]: E0121 13:03:13.478873 4765 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:03:13 crc kubenswrapper[4765]: E0121 13:03:13.478884 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:03:45.478865201 +0000 UTC m=+86.496591073 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:03:13 crc kubenswrapper[4765]: E0121 13:03:13.478933 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 13:03:45.478921613 +0000 UTC m=+86.496647535 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:03:13 crc kubenswrapper[4765]: E0121 13:03:13.479084 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:03:13 crc kubenswrapper[4765]: E0121 13:03:13.479098 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:03:13 crc kubenswrapper[4765]: E0121 13:03:13.479110 4765 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:03:13 crc kubenswrapper[4765]: E0121 13:03:13.479144 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 13:03:45.479130008 +0000 UTC m=+86.496855950 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:03:13 crc kubenswrapper[4765]: E0121 13:03:13.479197 4765 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:03:13 crc kubenswrapper[4765]: E0121 13:03:13.479317 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:03:45.479296333 +0000 UTC m=+86.497022225 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.492770 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.492811 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.492821 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.492836 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.492847 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:13Z","lastTransitionTime":"2026-01-21T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.595129 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.595176 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.595189 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.595221 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.595245 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:13Z","lastTransitionTime":"2026-01-21T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.612674 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:13 crc kubenswrapper[4765]: E0121 13:03:13.612825 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.678039 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 00:24:26.801036827 +0000 UTC Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.698457 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.698504 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.698515 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.698533 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.698544 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:13Z","lastTransitionTime":"2026-01-21T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.801183 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.801247 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.801259 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.801277 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.801289 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:13Z","lastTransitionTime":"2026-01-21T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.904990 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.905389 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.905510 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.905623 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:13 crc kubenswrapper[4765]: I0121 13:03:13.905712 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:13Z","lastTransitionTime":"2026-01-21T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.009041 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.009097 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.009116 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.009145 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.009163 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:14Z","lastTransitionTime":"2026-01-21T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.112294 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.112818 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.112976 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.113144 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.113350 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:14Z","lastTransitionTime":"2026-01-21T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.216801 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.216860 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.216915 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.216943 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.216971 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:14Z","lastTransitionTime":"2026-01-21T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.261510 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.274752 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.280466 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.293599 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.309136 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.319883 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.319944 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.319956 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.319977 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.319990 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:14Z","lastTransitionTime":"2026-01-21T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.328911 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7ea5517a7397813e068f0d9bdac688d17d12cfe232c604d4b47ec4e044ab9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f
69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\
\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.349853 4765 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:07Z\\\",\\\"message\\\":\\\"s/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.627832 6124 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 13:03:07.628081 6124 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.628078 6124 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.627796 6124 factory.go:656] Stopping watch factory\\\\nI0121 13:03:07.628593 6124 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.628834 6124 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 13:03:07.629259 6124 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.629537 6124 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.629907 6124 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x677d_openshift-ovn-kubernetes(cd80c14d-ebec-4d65-8116-149400d6f8be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.365576 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.376680 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.389988 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.401340 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.413054 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.422441 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.422723 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.422822 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.422920 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.423024 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:14Z","lastTransitionTime":"2026-01-21T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.427019 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.437946 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.449586 4765 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dea79f-de5c-4034-9742-c322b723a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t7jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.464635 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.481697 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:14Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.496565 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:14Z is after 2025-08-24T17:21:41Z" Jan 21 
13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.525969 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.526014 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.526027 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.526045 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.526058 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:14Z","lastTransitionTime":"2026-01-21T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.613374 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.613441 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.613498 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:14 crc kubenswrapper[4765]: E0121 13:03:14.613562 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:14 crc kubenswrapper[4765]: E0121 13:03:14.613692 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:14 crc kubenswrapper[4765]: E0121 13:03:14.613795 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.628414 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.628469 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.628477 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.628495 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.628506 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:14Z","lastTransitionTime":"2026-01-21T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.678357 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 17:40:37.627799093 +0000 UTC Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.731745 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.731795 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.731805 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.731823 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.731834 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:14Z","lastTransitionTime":"2026-01-21T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.834876 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.834954 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.834972 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.835001 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.835019 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:14Z","lastTransitionTime":"2026-01-21T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.937814 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.938287 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.938418 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.938533 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:14 crc kubenswrapper[4765]: I0121 13:03:14.938629 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:14Z","lastTransitionTime":"2026-01-21T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.041693 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.041743 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.041764 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.041790 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.041808 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:15Z","lastTransitionTime":"2026-01-21T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.144108 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.144145 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.144154 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.144168 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.144178 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:15Z","lastTransitionTime":"2026-01-21T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.248101 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.248671 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.248876 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.249109 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.249289 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:15Z","lastTransitionTime":"2026-01-21T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.352362 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.352721 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.352861 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.352965 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.353057 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:15Z","lastTransitionTime":"2026-01-21T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.456307 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.456364 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.456378 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.456398 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.456410 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:15Z","lastTransitionTime":"2026-01-21T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.559714 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.559776 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.559795 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.559822 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.559835 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:15Z","lastTransitionTime":"2026-01-21T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.613118 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:15 crc kubenswrapper[4765]: E0121 13:03:15.613365 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.662867 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.662921 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.662935 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.662955 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.662970 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:15Z","lastTransitionTime":"2026-01-21T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.679526 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:57:41.840146957 +0000 UTC Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.765541 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.765606 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.765625 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.765654 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.765675 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:15Z","lastTransitionTime":"2026-01-21T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.868285 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.868341 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.868350 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.868366 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.868376 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:15Z","lastTransitionTime":"2026-01-21T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.977409 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.977488 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.977500 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.977517 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:15 crc kubenswrapper[4765]: I0121 13:03:15.977529 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:15Z","lastTransitionTime":"2026-01-21T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.080984 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.081037 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.081051 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.081070 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.081083 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:16Z","lastTransitionTime":"2026-01-21T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.183942 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.184053 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.184071 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.184099 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.184114 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:16Z","lastTransitionTime":"2026-01-21T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.286167 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.286208 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.286239 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.286254 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.286262 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:16Z","lastTransitionTime":"2026-01-21T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.388878 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.388921 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.388931 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.388945 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.388954 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:16Z","lastTransitionTime":"2026-01-21T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.491962 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.492036 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.492048 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.492064 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.492074 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:16Z","lastTransitionTime":"2026-01-21T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.599993 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.600047 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.600065 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.600083 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.600096 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:16Z","lastTransitionTime":"2026-01-21T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.612703 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.612736 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.612704 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:16 crc kubenswrapper[4765]: E0121 13:03:16.612839 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:16 crc kubenswrapper[4765]: E0121 13:03:16.612913 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:16 crc kubenswrapper[4765]: E0121 13:03:16.613011 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.680644 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 14:24:04.417785923 +0000 UTC Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.702601 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.702629 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.702639 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.702652 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.702661 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:16Z","lastTransitionTime":"2026-01-21T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.804678 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.804708 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.804718 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.804737 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.804749 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:16Z","lastTransitionTime":"2026-01-21T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.907896 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.907967 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.907991 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.908021 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:16 crc kubenswrapper[4765]: I0121 13:03:16.908041 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:16Z","lastTransitionTime":"2026-01-21T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.011631 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.011665 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.011675 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.011690 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.011702 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:17Z","lastTransitionTime":"2026-01-21T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.113657 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.113703 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.113715 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.113732 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.113742 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:17Z","lastTransitionTime":"2026-01-21T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.216108 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.216153 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.216166 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.216184 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.216195 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:17Z","lastTransitionTime":"2026-01-21T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.319053 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.319101 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.319120 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.319145 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.319163 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:17Z","lastTransitionTime":"2026-01-21T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.423122 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.423192 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.423235 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.423268 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.423295 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:17Z","lastTransitionTime":"2026-01-21T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.525865 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.525931 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.525945 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.525966 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.525977 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:17Z","lastTransitionTime":"2026-01-21T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.613396 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:17 crc kubenswrapper[4765]: E0121 13:03:17.613569 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.628544 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.628583 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.628593 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.628608 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.628620 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:17Z","lastTransitionTime":"2026-01-21T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.681165 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 03:34:04.213450769 +0000 UTC Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.730970 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.731051 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.731070 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.731096 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.731113 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:17Z","lastTransitionTime":"2026-01-21T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.833731 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.833800 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.833832 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.833858 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.833895 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:17Z","lastTransitionTime":"2026-01-21T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.936959 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.937007 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.937017 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.937033 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:17 crc kubenswrapper[4765]: I0121 13:03:17.937079 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:17Z","lastTransitionTime":"2026-01-21T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.039917 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.039973 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.039993 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.040016 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.040029 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:18Z","lastTransitionTime":"2026-01-21T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.142336 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.142380 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.142394 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.142411 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.142421 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:18Z","lastTransitionTime":"2026-01-21T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.305519 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.305584 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.305597 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.305615 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.305625 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:18Z","lastTransitionTime":"2026-01-21T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.408613 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.408662 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.408679 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.408702 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.408714 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:18Z","lastTransitionTime":"2026-01-21T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.511044 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.511086 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.511097 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.511115 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.511125 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:18Z","lastTransitionTime":"2026-01-21T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.612795 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.612928 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.613054 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:18 crc kubenswrapper[4765]: E0121 13:03:18.613051 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:18 crc kubenswrapper[4765]: E0121 13:03:18.613361 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.613487 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.613518 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.613529 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.613544 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.613554 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:18Z","lastTransitionTime":"2026-01-21T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:18 crc kubenswrapper[4765]: E0121 13:03:18.613562 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.681704 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 05:48:20.744098801 +0000 UTC Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.716310 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.716359 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.716371 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.716388 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.716400 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:18Z","lastTransitionTime":"2026-01-21T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.819329 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.819380 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.819392 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.819413 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.819426 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:18Z","lastTransitionTime":"2026-01-21T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.921983 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.922028 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.922039 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.922058 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:18 crc kubenswrapper[4765]: I0121 13:03:18.922070 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:18Z","lastTransitionTime":"2026-01-21T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.025452 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.025872 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.025962 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.026061 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.026143 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:19Z","lastTransitionTime":"2026-01-21T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.129398 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.129449 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.129465 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.129481 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.129494 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:19Z","lastTransitionTime":"2026-01-21T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.232361 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.232392 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.232400 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.232416 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.232426 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:19Z","lastTransitionTime":"2026-01-21T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.335575 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.335641 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.335650 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.335666 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.335692 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:19Z","lastTransitionTime":"2026-01-21T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.441681 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.442040 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.442111 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.442188 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.442299 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:19Z","lastTransitionTime":"2026-01-21T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.545537 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.545571 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.545579 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.545596 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.545605 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:19Z","lastTransitionTime":"2026-01-21T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.613404 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:19 crc kubenswrapper[4765]: E0121 13:03:19.613592 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.627439 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.641250 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.648511 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.648596 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.648613 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.648634 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.648647 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:19Z","lastTransitionTime":"2026-01-21T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.652098 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.666293 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dea79f-de5c-4034-9742-c322b723a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t7jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:19Z is after 2025-08-24T17:21:41Z"
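Every one of these status patches fails the same way: the call to the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 fails TLS verification because the webhook's serving certificate expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-21. A minimal Go sketch of the validity check that is failing, run against a hypothetical PEM file on disk (in practice the certificate would first be extracted from the listener or its secret):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path; the failing certificate is the one served by the
        // network-node-identity webhook on 127.0.0.1:9743.
        pemBytes, err := os.ReadFile("/tmp/webhook-serving.pem")
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse:", err)
            return
        }
        now := time.Now().UTC()
        switch {
        case now.After(cert.NotAfter):
            // Mirrors the error above: "current time ... is after ..."
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        case now.Before(cert.NotBefore):
            fmt.Println("certificate is not yet valid")
        default:
            fmt.Println("certificate valid until", cert.NotAfter.UTC().Format(time.RFC3339))
        }
    }

Until that certificate is rotated, the pod statuses the kubelet computes here never reach the API server, even though the pods themselves (node-ca, node-resolver, machine-config-daemon) are running.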
Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.682536 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 12:30:06.715628354 +0000 UTC Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.682628 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.696741 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.710483 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.722304 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:19Z is after 2025-08-24T17:21:41Z" Jan 21 
13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.736152 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8b07e5-b316-45ac-8511-cb09b9d4d3bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5111055c302cebecfb649ba86b3c51d36213cdbebe7c90c5aadea87dc93399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a549a3dc26287c8cab6ffaaf643a3b7a9aee3ba27f10f0741c11412d152b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e73f5c3b6b993cba5ad746efdbe1e24cb5bd1ac653a80d6c47eaaff07d917eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.751370 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.751424 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.751435 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.751449 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.751458 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:19Z","lastTransitionTime":"2026-01-21T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.759912 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.775004 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.787678 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.803847 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7ea5517a7397813e068f0d9bdac688d17d12cfe232c604d4b47ec4e044ab9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.823842 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:07Z\\\",\\\"message\\\":\\\"s/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.627832 6124 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 13:03:07.628081 6124 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.628078 6124 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.627796 6124 factory.go:656] Stopping watch factory\\\\nI0121 13:03:07.628593 6124 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.628834 6124 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 13:03:07.629259 6124 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.629537 6124 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.629907 6124 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x677d_openshift-ovn-kubernetes(cd80c14d-ebec-4d65-8116-149400d6f8be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.839293 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.854976 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.855014 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.855027 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.855043 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.855053 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:19Z","lastTransitionTime":"2026-01-21T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.855740 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.871059 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:19Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.957462 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.958801 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.958911 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.958997 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:19 crc kubenswrapper[4765]: I0121 13:03:19.959080 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:19Z","lastTransitionTime":"2026-01-21T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.062351 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.062770 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.062984 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.063163 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.063351 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:20Z","lastTransitionTime":"2026-01-21T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.167650 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.168058 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.168156 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.168280 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.168384 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:20Z","lastTransitionTime":"2026-01-21T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.252485 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs\") pod \"network-metrics-daemon-4t7jw\" (UID: \"d8dea79f-de5c-4034-9742-c322b723a59c\") " pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:20 crc kubenswrapper[4765]: E0121 13:03:20.252757 4765 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:03:20 crc kubenswrapper[4765]: E0121 13:03:20.252902 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs podName:d8dea79f-de5c-4034-9742-c322b723a59c nodeName:}" failed. No retries permitted until 2026-01-21 13:03:36.252868171 +0000 UTC m=+77.270594203 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs") pod "network-metrics-daemon-4t7jw" (UID: "d8dea79f-de5c-4034-9742-c322b723a59c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.271413 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.271469 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.271486 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.271511 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.271528 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:20Z","lastTransitionTime":"2026-01-21T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.374537 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.374574 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.374585 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.374599 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.374607 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:20Z","lastTransitionTime":"2026-01-21T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.477150 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.477203 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.477234 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.477255 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.477269 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:20Z","lastTransitionTime":"2026-01-21T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.580186 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.580240 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.580254 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.580271 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.580285 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:20Z","lastTransitionTime":"2026-01-21T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.613110 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:20 crc kubenswrapper[4765]: E0121 13:03:20.613291 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.613706 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:20 crc kubenswrapper[4765]: E0121 13:03:20.613772 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.613843 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:20 crc kubenswrapper[4765]: E0121 13:03:20.613902 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.682673 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 04:58:41.07715226 +0000 UTC Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.682841 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.682882 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.682893 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.682908 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.682918 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:20Z","lastTransitionTime":"2026-01-21T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.786448 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.786482 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.786491 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.786504 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.786514 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:20Z","lastTransitionTime":"2026-01-21T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.889002 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.889048 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.889064 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.889090 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.889103 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:20Z","lastTransitionTime":"2026-01-21T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.991519 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.991561 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.991576 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.991592 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:20 crc kubenswrapper[4765]: I0121 13:03:20.991602 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:20Z","lastTransitionTime":"2026-01-21T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.094032 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.094099 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.094116 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.094138 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.094154 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:21Z","lastTransitionTime":"2026-01-21T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.197088 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.197150 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.197173 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.197197 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.197251 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:21Z","lastTransitionTime":"2026-01-21T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.300125 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.300191 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.300201 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.300409 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.300422 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:21Z","lastTransitionTime":"2026-01-21T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.403091 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.403133 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.403141 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.403159 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.403170 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:21Z","lastTransitionTime":"2026-01-21T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.506365 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.506419 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.506430 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.506451 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.506465 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:21Z","lastTransitionTime":"2026-01-21T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.609004 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.609118 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.609140 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.609173 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.609196 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:21Z","lastTransitionTime":"2026-01-21T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.613432 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:21 crc kubenswrapper[4765]: E0121 13:03:21.613625 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.683706 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 13:54:32.323317177 +0000 UTC Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.712401 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.712457 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.712471 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.712493 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.712510 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:21Z","lastTransitionTime":"2026-01-21T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.815823 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.815872 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.815885 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.815906 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.815925 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:21Z","lastTransitionTime":"2026-01-21T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.918920 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.918968 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.918978 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.918995 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:21 crc kubenswrapper[4765]: I0121 13:03:21.919006 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:21Z","lastTransitionTime":"2026-01-21T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.022453 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.022507 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.022517 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.022539 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.022557 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:22Z","lastTransitionTime":"2026-01-21T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.124856 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.124899 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.124910 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.124926 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.124944 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:22Z","lastTransitionTime":"2026-01-21T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.227866 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.227932 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.227949 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.227969 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.227982 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:22Z","lastTransitionTime":"2026-01-21T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.330727 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.330775 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.330787 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.330806 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.330822 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:22Z","lastTransitionTime":"2026-01-21T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.372035 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.372095 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.372113 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.372138 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.372159 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:22Z","lastTransitionTime":"2026-01-21T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:22 crc kubenswrapper[4765]: E0121 13:03:22.393097 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.397665 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.397723 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.397736 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.397757 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.397771 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:22Z","lastTransitionTime":"2026-01-21T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:22 crc kubenswrapper[4765]: E0121 13:03:22.413009 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.416982 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.417046 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.417059 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.417078 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.417091 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:22Z","lastTransitionTime":"2026-01-21T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:22 crc kubenswrapper[4765]: E0121 13:03:22.431340 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.436819 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.436863 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
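
The repeated "Error updating node status, will retry" entries above all fail for the same reason: the network-node-identity webhook on 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-21T13:03:22Z, so every status PATCH is rejected before it reaches the Node object. A minimal stdlib-Python sketch for pulling that skew out of a saved journal dump (the journal.log file name and one-entry-per-line layout are assumptions, not part of this log):

    import re
    import sys
    from datetime import datetime

    STAMP = "%Y-%m-%dT%H:%M:%SZ"
    # Matches the Go x509 error text quoted in the kubelet entries above.
    PAT = re.compile(r"current time (\S+Z) is after (\S+Z)")

    def cert_skew(journal_text):
        """Yield (node_time, not_after, age_past_expiry) per unique x509 error."""
        for now_s, after_s in sorted(set(PAT.findall(journal_text))):
            now = datetime.strptime(now_s, STAMP)
            not_after = datetime.strptime(after_s, STAMP)
            yield now, not_after, now - not_after

    if __name__ == "__main__":
        with open(sys.argv[1] if len(sys.argv) > 1 else "journal.log") as f:
            for now, not_after, age in cert_skew(f.read()):
                print(f"cert expired {age} before node time {now} (notAfter {not_after})")

For the timestamps in this log the age works out to roughly 150 days, consistent with a CRC/OpenShift Local VM resumed long after its certificates' rotation window had passed.
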
event="NodeHasNoDiskPressure" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.436872 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.436887 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.436898 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:22Z","lastTransitionTime":"2026-01-21T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:22 crc kubenswrapper[4765]: E0121 13:03:22.451184 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.454954 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.455011 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.455025 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.455057 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.455069 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:22Z","lastTransitionTime":"2026-01-21T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:22 crc kubenswrapper[4765]: E0121 13:03:22.474124 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:22Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:22 crc kubenswrapper[4765]: E0121 13:03:22.474357 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.476563 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
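
The "Unable to update node status" entry closes the burst: kubelet retries the status PATCH a fixed number of times per sync (the bound is the nodeStatusUpdateRetry constant in the upstream kubelet source, 5 at the time of writing), logging "will retry" for each failure, then gives up until the next sync tick. A hedged sketch for confirming that shape in a one-entry-per-line journal dump (the constant itself is inferred, not read from the log):

    def retry_bursts(lines):
        """Yield the number of 'will retry' failures preceding each give-up."""
        burst = 0
        for line in lines:
            if "Error updating node status, will retry" in line:
                burst += 1
            elif "Unable to update node status" in line:
                yield burst
                burst = 0

    # Usage against the same saved dump as above (path is an assumption):
    # with open("journal.log") as f:
    #     print(list(retry_bursts(f)))   # e.g. [5, 5, ...] for this failure mode
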
event="NodeHasSufficientMemory" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.476599 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.476608 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.476626 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.476637 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:22Z","lastTransitionTime":"2026-01-21T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.580670 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.580729 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.580745 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.580766 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.580779 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:22Z","lastTransitionTime":"2026-01-21T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.613275 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.613271 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.613292 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:22 crc kubenswrapper[4765]: E0121 13:03:22.613940 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:22 crc kubenswrapper[4765]: E0121 13:03:22.613982 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:22 crc kubenswrapper[4765]: E0121 13:03:22.613710 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.682632 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.682685 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.682696 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.682713 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.682723 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:22Z","lastTransitionTime":"2026-01-21T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.684853 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 16:21:19.896951326 +0000 UTC Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.785828 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.785882 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.785892 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.785910 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.785920 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:22Z","lastTransitionTime":"2026-01-21T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.888689 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.888753 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.888770 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.888791 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.888806 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:22Z","lastTransitionTime":"2026-01-21T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
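
This certificate_manager entry shows why the kubelet-serving certificate cannot save itself here: the rotation deadline (2026-01-12) is already in the past at 13:03:22 on 2026-01-21, so rotation is due immediately, but the CSR flow it triggers depends on the same broken control plane. The deadline is also re-randomized on every evaluation, which is why a later occurrence of this entry further below prints a different, even earlier deadline (2025-12-15): in upstream client-go the deadline is drawn with jitter from roughly the 70-90% point of the certificate's lifetime. A sketch of that computation (the exact jitter factors are an assumption from the k8s.io/client-go/util/certificate code, not something this log states):

    import random
    from datetime import datetime, timedelta

    def rotation_deadline(not_before, not_after, rng=random.random):
        """Pick a rotation time at ~70-90% of the cert lifetime, client-go style."""
        lifetime = (not_after - not_before).total_seconds()
        return not_before + timedelta(seconds=lifetime * (0.7 + 0.2 * rng()))

    # Hypothetical notBefore; notAfter is the expiration printed in the entry above.
    print(rotation_deadline(datetime(2025, 11, 26, 5, 53, 3),
                            datetime(2026, 2, 24, 5, 53, 3)))

With the deadline in the past, the manager keeps logging a freshly recomputed value on each pass while rotation keeps failing.
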
Has your network provider started?"} Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.991104 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.991550 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.991638 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.991744 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:22 crc kubenswrapper[4765]: I0121 13:03:22.991828 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:22Z","lastTransitionTime":"2026-01-21T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.093814 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.093852 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.093864 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.093881 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.093891 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:23Z","lastTransitionTime":"2026-01-21T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.197316 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.197383 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.197398 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.197421 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.197438 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:23Z","lastTransitionTime":"2026-01-21T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.300468 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.300520 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.300532 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.300551 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.300564 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:23Z","lastTransitionTime":"2026-01-21T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.403181 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.403252 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.403264 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.403281 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.403292 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:23Z","lastTransitionTime":"2026-01-21T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.505839 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.505897 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.505931 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.505947 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.505957 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:23Z","lastTransitionTime":"2026-01-21T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.609203 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.609280 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.609289 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.609305 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.609316 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:23Z","lastTransitionTime":"2026-01-21T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.613626 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:23 crc kubenswrapper[4765]: E0121 13:03:23.613765 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.614522 4765 scope.go:117] "RemoveContainer" containerID="e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.686037 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 12:15:00.082819254 +0000 UTC Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.711775 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.711821 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.711832 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.711848 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.711860 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:23Z","lastTransitionTime":"2026-01-21T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.814622 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.814667 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.814676 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.814696 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.814707 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:23Z","lastTransitionTime":"2026-01-21T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.917353 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.917406 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.917417 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.917431 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:23 crc kubenswrapper[4765]: I0121 13:03:23.917442 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:23Z","lastTransitionTime":"2026-01-21T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.019963 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.020018 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.020028 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.020047 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.020058 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:24Z","lastTransitionTime":"2026-01-21T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.122940 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.122995 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.123008 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.123027 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.123038 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:24Z","lastTransitionTime":"2026-01-21T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.227598 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.228129 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.228168 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.228195 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.228239 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:24Z","lastTransitionTime":"2026-01-21T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.330946 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.331005 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.331018 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.331042 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.331056 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:24Z","lastTransitionTime":"2026-01-21T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.433301 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.433376 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.433389 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.433408 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.433419 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:24Z","lastTransitionTime":"2026-01-21T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.535726 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.535778 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.535789 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.535806 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.535819 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:24Z","lastTransitionTime":"2026-01-21T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.614464 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:24 crc kubenswrapper[4765]: E0121 13:03:24.614668 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.614940 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.615229 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:24 crc kubenswrapper[4765]: E0121 13:03:24.615402 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:24 crc kubenswrapper[4765]: E0121 13:03:24.615632 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.629141 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.638440 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.638478 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.638487 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.638503 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.638515 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:24Z","lastTransitionTime":"2026-01-21T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.672830 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovnkube-controller/1.log" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.676252 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerStarted","Data":"3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1"} Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.677260 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.691339 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 18:10:02.463339807 +0000 UTC Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.702663 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8b07e5-b316-45ac-8511-cb09b9d4d3bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5111055c302cebecfb649ba86b3c51d36213cdbebe7c90c5aadea87dc93399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a549a3dc26287c8cab6ffaaf643a3b7a9aee3ba27f10f0741c11412d152b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e73f5c3b6b993cba5ad746efdbe1e24cb5bd1ac653a80d6c47eaaff07d917eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.718679 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014b5379-702d-46a3-a4c7-081c286a5c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4bb3739eb8cd7744b7117f4db0817ff3feb326f9016dedb4bfb5dc0614ed0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55035304fca2b0d4e18e762b3ae515727a123c3ad9fd7eb44e2672a8881bad8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55035304fca2b0d4e18e762b3ae515727a123c3ad9fd7eb44e2672a8881bad8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.735196 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.741088 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.741123 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.741134 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.741149 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.741160 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:24Z","lastTransitionTime":"2026-01-21T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.756281 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.773708 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.800027 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ed2d1febabd20ed788dae152ba09e38b56ad40d
228861d7ca9659439f0845e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:07Z\\\",\\\"message\\\":\\\"s/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.627832 6124 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 13:03:07.628081 6124 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.628078 6124 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.627796 6124 factory.go:656] Stopping watch factory\\\\nI0121 13:03:07.628593 6124 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.628834 6124 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 13:03:07.629259 6124 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.629537 6124 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.629907 6124 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.816552 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.833221 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.843720 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.843769 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.843779 4765 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.843798 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.843812 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:24Z","lastTransitionTime":"2026-01-21T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.854811 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.872286 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7ea5517a7397813e068f0d9bdac688d17d12cfe232c604d4b47ec4e044ab9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.892624 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.911381 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.927805 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.943358 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.946052 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.946106 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.946118 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.946137 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.946149 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:24Z","lastTransitionTime":"2026-01-21T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.955272 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.966046 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.981111 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:24 crc kubenswrapper[4765]: I0121 13:03:24.994972 4765 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dea79f-de5c-4034-9742-c322b723a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t7jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.049549 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.049588 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 
13:03:25.049599 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.049614 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.049624 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:25Z","lastTransitionTime":"2026-01-21T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.152334 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.152400 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.152414 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.152434 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.152445 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:25Z","lastTransitionTime":"2026-01-21T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.254624 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.254660 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.254669 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.254686 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.254696 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:25Z","lastTransitionTime":"2026-01-21T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.357151 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.357204 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.357227 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.357245 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.357257 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:25Z","lastTransitionTime":"2026-01-21T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.459269 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.459309 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.459318 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.459334 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.459342 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:25Z","lastTransitionTime":"2026-01-21T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.562320 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.562375 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.562388 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.562407 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.562419 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:25Z","lastTransitionTime":"2026-01-21T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.613268 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:25 crc kubenswrapper[4765]: E0121 13:03:25.613415 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.665104 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.665148 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.665159 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.665174 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.665184 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:25Z","lastTransitionTime":"2026-01-21T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.691545 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 07:52:57.426240852 +0000 UTC Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.767845 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.767896 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.767908 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.767928 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.767942 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:25Z","lastTransitionTime":"2026-01-21T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.870559 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.870613 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.870623 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.870640 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.870653 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:25Z","lastTransitionTime":"2026-01-21T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.973613 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.973678 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.973696 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.973716 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:25 crc kubenswrapper[4765]: I0121 13:03:25.973732 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:25Z","lastTransitionTime":"2026-01-21T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.076427 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.076477 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.076490 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.076509 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.076520 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:26Z","lastTransitionTime":"2026-01-21T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.179447 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.179494 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.179506 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.179546 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.179559 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:26Z","lastTransitionTime":"2026-01-21T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.281970 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.282013 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.282023 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.282040 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.282051 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:26Z","lastTransitionTime":"2026-01-21T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.384283 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.384332 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.384343 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.384359 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.384388 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:26Z","lastTransitionTime":"2026-01-21T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.487018 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.487057 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.487066 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.487083 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.487096 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:26Z","lastTransitionTime":"2026-01-21T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.589444 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.589502 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.589513 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.589532 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.589547 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:26Z","lastTransitionTime":"2026-01-21T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.612885 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.612962 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:26 crc kubenswrapper[4765]: E0121 13:03:26.613044 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.613063 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:26 crc kubenswrapper[4765]: E0121 13:03:26.613319 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:26 crc kubenswrapper[4765]: E0121 13:03:26.613384 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.684248 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovnkube-controller/2.log" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.684783 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovnkube-controller/1.log" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.687726 4765 generic.go:334] "Generic (PLEG): container finished" podID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerID="3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1" exitCode=1 Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.687797 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerDied","Data":"3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1"} Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.687858 4765 scope.go:117] "RemoveContainer" containerID="e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.688934 4765 scope.go:117] "RemoveContainer" containerID="3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1" Jan 21 13:03:26 crc kubenswrapper[4765]: E0121 13:03:26.689108 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x677d_openshift-ovn-kubernetes(cd80c14d-ebec-4d65-8116-149400d6f8be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.691699 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 22:08:42.16359445 +0000 UTC Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.693938 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.693983 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.693994 4765 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.694013 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.694028 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:26Z","lastTransitionTime":"2026-01-21T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.702718 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8b07e5-b316-45ac-8511-cb09b9d4d3bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5111055c302cebecfb649ba86b3c51d36213cdbebe7c90c5aadea87dc93399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a549a3dc26287c8cab6ffaaf643a3b7a9aee3ba27f10f0741c11412d152b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e73f5c3b6b993cba5ad746efdbe1e24cb5bd1ac653a80d6c47eaaff07d917eeb\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.715476 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014b5379-702d-46a3-a4c7-081c286a5c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4bb3739eb8cd7744b7117f4db0817ff3feb326f9016dedb4bfb5dc0614ed0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55035304fca2b0d4e18e762b3ae515727a123c3ad9fd7eb44e2672a8881bad8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55035304fca2b0d4e18e762b3ae515727a123c3ad9fd7eb44e2672a8881bad8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.733452 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.747648 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.760649 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:26Z is after 2025-08-24T17:21:41Z" Jan 21 
13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.775475 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.796341 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.796382 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.796394 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.796413 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.796427 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:26Z","lastTransitionTime":"2026-01-21T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.797832 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.812329 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.829707 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7ea5517a7397813e068f0d9bdac688d17d12cfe232c604d4b47ec4e044ab9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.851807 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:07Z\\\",\\\"message\\\":\\\"s/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.627832 6124 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 13:03:07.628081 6124 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.628078 6124 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.627796 6124 factory.go:656] Stopping watch factory\\\\nI0121 13:03:07.628593 6124 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.628834 6124 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 13:03:07.629259 6124 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.629537 6124 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.629907 6124 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:25Z\\\",\\\"message\\\":\\\"03:24.912748 6345 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:169.254.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4de02fb8-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 13:03:24.912867 6345 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.866773 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.883114 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.899813 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.899860 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.899872 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.899891 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.899904 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:26Z","lastTransitionTime":"2026-01-21T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.901852 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.915951 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.926702 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.938606 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.955901 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:26 crc kubenswrapper[4765]: I0121 13:03:26.970557 4765 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dea79f-de5c-4034-9742-c322b723a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t7jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:26Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.002922 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.002979 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 
13:03:27.002991 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.003010 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.003021 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:27Z","lastTransitionTime":"2026-01-21T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.106242 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.106281 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.106293 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.106313 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.106325 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:27Z","lastTransitionTime":"2026-01-21T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.209500 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.209541 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.209552 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.209572 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.209587 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:27Z","lastTransitionTime":"2026-01-21T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.313485 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.313535 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.313547 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.313566 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.313577 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:27Z","lastTransitionTime":"2026-01-21T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.415996 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.416056 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.416073 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.416094 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.416111 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:27Z","lastTransitionTime":"2026-01-21T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.518461 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.518495 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.518505 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.518522 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.518533 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:27Z","lastTransitionTime":"2026-01-21T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.615815 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:27 crc kubenswrapper[4765]: E0121 13:03:27.615944 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.620504 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.620528 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.620537 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.620550 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.620560 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:27Z","lastTransitionTime":"2026-01-21T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.691786 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 00:31:21.272919308 +0000 UTC Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.692592 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovnkube-controller/2.log" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.723358 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.723417 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.723429 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.723491 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.723505 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:27Z","lastTransitionTime":"2026-01-21T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.826266 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.826312 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.826406 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.826460 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.826471 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:27Z","lastTransitionTime":"2026-01-21T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.929118 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.929497 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.929578 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.929645 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:27 crc kubenswrapper[4765]: I0121 13:03:27.929710 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:27Z","lastTransitionTime":"2026-01-21T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.032058 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.032099 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.032107 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.032122 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.032134 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:28Z","lastTransitionTime":"2026-01-21T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.134162 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.134224 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.134239 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.134256 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.134269 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:28Z","lastTransitionTime":"2026-01-21T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.237577 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.238270 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.238397 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.238475 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.238530 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:28Z","lastTransitionTime":"2026-01-21T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.341341 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.341576 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.341588 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.341610 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.341626 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:28Z","lastTransitionTime":"2026-01-21T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.444708 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.444761 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.444776 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.444796 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.444806 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:28Z","lastTransitionTime":"2026-01-21T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.548310 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.548631 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.548696 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.548827 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.548913 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:28Z","lastTransitionTime":"2026-01-21T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.613014 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.613106 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:28 crc kubenswrapper[4765]: E0121 13:03:28.613244 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:28 crc kubenswrapper[4765]: E0121 13:03:28.613430 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.613590 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:28 crc kubenswrapper[4765]: E0121 13:03:28.613789 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.652286 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.652452 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.652520 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.652595 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.652659 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:28Z","lastTransitionTime":"2026-01-21T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.692466 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 13:15:14.439781153 +0000 UTC Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.756236 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.756300 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.756315 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.756340 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.756350 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:28Z","lastTransitionTime":"2026-01-21T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.862156 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.862197 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.862232 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.862249 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.862258 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:28Z","lastTransitionTime":"2026-01-21T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.964089 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.964128 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.964138 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.964152 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:28 crc kubenswrapper[4765]: I0121 13:03:28.964162 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:28Z","lastTransitionTime":"2026-01-21T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.067098 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.067146 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.067155 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.067172 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.067185 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:29Z","lastTransitionTime":"2026-01-21T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.170612 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.170669 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.170682 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.170701 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.170713 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:29Z","lastTransitionTime":"2026-01-21T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.273291 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.273346 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.273356 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.273372 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.273383 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:29Z","lastTransitionTime":"2026-01-21T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.376267 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.376318 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.376330 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.376349 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.376374 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:29Z","lastTransitionTime":"2026-01-21T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.479315 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.479394 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.479408 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.479425 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.479439 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:29Z","lastTransitionTime":"2026-01-21T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.582143 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.582183 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.582194 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.582228 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.582242 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:29Z","lastTransitionTime":"2026-01-21T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.612936 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:29 crc kubenswrapper[4765]: E0121 13:03:29.613129 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.626921 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8b07e5-b316-45ac-8511-cb09b9d4d3bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5111055c302cebecfb649ba86b3c51d36213cdbebe7c90c5aadea87dc93399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a549a3dc26287c8cab6ffaaf643a3b7a9aee3ba27f10f0741c11412d152b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e73f5c3b6b993cba5ad746efdbe1e24cb5bd1ac653a80d6c47eaaff07d917eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.637592 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014b5379-702d-46a3-a4c7-081c286a5c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4bb3739eb8cd7744b7117f4db0817ff3feb326f9016dedb4bfb5dc0614ed0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55035304fca2b0d4e18e762b3ae515727a123c3ad9fd7eb44e2672a8881bad8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55035304fca2b0d4e18e762b3ae515727a123c3ad9fd7eb44e2672a8881bad8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.651182 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.663683 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.676323 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:29Z is after 2025-08-24T17:21:41Z" Jan 21 
13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.685779 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.685814 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.685821 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.685837 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.685866 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:29Z","lastTransitionTime":"2026-01-21T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.689435 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 
13:03:29.694305 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 13:53:37.623624035 +0000 UTC Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.703407 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.716606 
4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.730755 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7ea5517a7397813e068f0d9bdac688d17d12cfe232c604d4b47ec4e044ab9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.754571 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:07Z\\\",\\\"message\\\":\\\"s/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.627832 6124 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 13:03:07.628081 6124 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.628078 6124 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.627796 6124 factory.go:656] Stopping watch factory\\\\nI0121 13:03:07.628593 6124 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.628834 6124 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 13:03:07.629259 6124 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.629537 6124 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.629907 6124 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:25Z\\\",\\\"message\\\":\\\"03:24.912748 6345 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:169.254.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4de02fb8-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 13:03:24.912867 6345 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.768748 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.782858 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.789182 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.789249 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.789262 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.789282 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.789295 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:29Z","lastTransitionTime":"2026-01-21T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.800881 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.817631 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.826977 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.838911 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.848524 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.861118 4765 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dea79f-de5c-4034-9742-c322b723a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t7jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:29Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.892084 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.892134 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 
13:03:29.892145 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.892162 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.892174 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:29Z","lastTransitionTime":"2026-01-21T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.995759 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.995802 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.995814 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.995850 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:29 crc kubenswrapper[4765]: I0121 13:03:29.995863 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:29Z","lastTransitionTime":"2026-01-21T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.098111 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.098149 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.098160 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.098180 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.098193 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:30Z","lastTransitionTime":"2026-01-21T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.200781 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.200830 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.200842 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.200858 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.200869 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:30Z","lastTransitionTime":"2026-01-21T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.303772 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.303859 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.303871 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.303887 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.303901 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:30Z","lastTransitionTime":"2026-01-21T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.406648 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.406702 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.406713 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.406737 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.406749 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:30Z","lastTransitionTime":"2026-01-21T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.509641 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.509684 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.509695 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.509712 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.509726 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:30Z","lastTransitionTime":"2026-01-21T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.612188 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.612238 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.612249 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.612262 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.612274 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:30Z","lastTransitionTime":"2026-01-21T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.612763 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.612792 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.612833 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:30 crc kubenswrapper[4765]: E0121 13:03:30.612911 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:30 crc kubenswrapper[4765]: E0121 13:03:30.612953 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:30 crc kubenswrapper[4765]: E0121 13:03:30.613012 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.695268 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:11:34.306645575 +0000 UTC Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.714075 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.714109 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.714121 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.714154 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.714167 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:30Z","lastTransitionTime":"2026-01-21T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.816821 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.816883 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.816895 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.816915 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.816930 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:30Z","lastTransitionTime":"2026-01-21T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.919355 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.919403 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.919415 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.919433 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:30 crc kubenswrapper[4765]: I0121 13:03:30.919445 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:30Z","lastTransitionTime":"2026-01-21T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.021895 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.021926 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.021936 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.021950 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.021966 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:31Z","lastTransitionTime":"2026-01-21T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.124479 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.124521 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.124531 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.124547 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.124558 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:31Z","lastTransitionTime":"2026-01-21T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.227180 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.227242 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.227255 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.227270 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.227282 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:31Z","lastTransitionTime":"2026-01-21T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.331109 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.331153 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.331163 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.331179 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.331189 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:31Z","lastTransitionTime":"2026-01-21T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.434046 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.434420 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.434436 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.434451 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.434461 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:31Z","lastTransitionTime":"2026-01-21T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.537447 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.537504 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.537517 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.537534 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.537543 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:31Z","lastTransitionTime":"2026-01-21T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.612689 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:31 crc kubenswrapper[4765]: E0121 13:03:31.612828 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.639403 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.639437 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.639449 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.639464 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.639472 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:31Z","lastTransitionTime":"2026-01-21T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.696061 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 14:33:29.869890333 +0000 UTC Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.742464 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.742497 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.742528 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.742542 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.742553 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:31Z","lastTransitionTime":"2026-01-21T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.742464 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.742497 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.742528 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.742542 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.742553 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:31Z","lastTransitionTime":"2026-01-21T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.845638 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.845713 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.845724 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.845738 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.845749 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:31Z","lastTransitionTime":"2026-01-21T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.948317 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.948364 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.948389 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.948407 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:31 crc kubenswrapper[4765]: I0121 13:03:31.948416 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:31Z","lastTransitionTime":"2026-01-21T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.050921 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.050980 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.050990 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.051004 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.051016 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:32Z","lastTransitionTime":"2026-01-21T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.153775 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.153826 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.153838 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.153855 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.153867 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:32Z","lastTransitionTime":"2026-01-21T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.256059 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.256099 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.256111 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.256129 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.256141 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:32Z","lastTransitionTime":"2026-01-21T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.359138 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.359196 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.359229 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.359254 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.359268 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:32Z","lastTransitionTime":"2026-01-21T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.461871 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.461917 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.461930 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.461945 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.461956 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:32Z","lastTransitionTime":"2026-01-21T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.493948 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.494001 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.494013 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.494030 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.494064 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:32Z","lastTransitionTime":"2026-01-21T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
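[Editor's note] Every Ready=False condition in this stream carries the same root message: the runtime found no CNI configuration file in /etc/kubernetes/cni/net.d/. A self-contained approximation of that readiness probe is sketched below; the real check lives in the container runtime's CNI loader (libcni), which scans the conf dir for network config files, so the exact file patterns here are an assumption for illustration.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// cniConfigPresent mimics the runtime-side scan that produces the
// "no CNI configuration file" message above: it looks for any network
// config file in the CNI conf directory. The extension list is an
// assumption; libcni is documented to accept .conf, .conflist and .json.
func cniConfigPresent(dir string) bool {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		if matches, _ := filepath.Glob(filepath.Join(dir, pat)); len(matches) > 0 {
			return true
		}
	}
	return false
}

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	if !cniConfigPresent(dir) {
		// Mirrors the message the kubelet keeps logging above.
		fmt.Printf("no CNI configuration file in %s. Has your network provider started?\n", dir)
	}
}
```

Until the network operator (here, the OVN-Kubernetes/Multus pods that are themselves waiting to start) writes a config into that directory, the Ready condition cannot flip to True, which is why the same five-line event group repeats every ~100ms.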
Jan 21 13:03:32 crc kubenswrapper[4765]: E0121 13:03:32.508378 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:32Z is after 2025-08-24T17:21:41Z"
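[Editor's note] The patch failure above is the first hard error in this stream, and it is distinct from the CNI problem: the node-status PATCH is rejected because the node.network-node-identity.openshift.io webhook's serving certificate expired on 2025-08-24, months before the log's clock of 2026-01-21. Below is a minimal reproduction of the client-side validity check that produces "certificate has expired or is not yet valid", assuming a PEM copy of the webhook certificate saved to a local file (the path is a placeholder, not taken from the cluster).

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Placeholder path: a PEM dump of the webhook's serving certificate.
	pemBytes, err := os.ReadFile("webhook-serving.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Same validity-window test the TLS handshake applies before it
	// reports "x509: certificate has expired or is not yet valid".
	now := time.Now().UTC()
	if now.After(cert.NotAfter) || now.Before(cert.NotBefore) {
		fmt.Printf("current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
```

Because the webhook intercepts node-status patches, the kubelet cannot report any status at all until the certificate is rotated, which is why the identical failure repeats in the retries that follow.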
event="NodeHasNoDiskPressure" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.512552 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.512572 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.512585 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:32Z","lastTransitionTime":"2026-01-21T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:32 crc kubenswrapper[4765]: E0121 13:03:32.525341 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.529780 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.529849 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.529868 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.529893 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.529910 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:32Z","lastTransitionTime":"2026-01-21T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:32 crc kubenswrapper[4765]: E0121 13:03:32.543489 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.547329 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.547504 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.547606 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.547713 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.547812 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:32Z","lastTransitionTime":"2026-01-21T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:32 crc kubenswrapper[4765]: E0121 13:03:32.563246 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[... image list identical to previous status-patch attempt, elided ...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.568001 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.568341 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.568518 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.568654 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.568746 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:32Z","lastTransitionTime":"2026-01-21T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:32 crc kubenswrapper[4765]: E0121 13:03:32.585676 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[... image list identical to previous status-patch attempt, elided ...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:32Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:32 crc kubenswrapper[4765]: E0121 13:03:32.586256 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.588295 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.588433 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.588562 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.588675 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.588794 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:32Z","lastTransitionTime":"2026-01-21T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.612749 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.612799 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:32 crc kubenswrapper[4765]: E0121 13:03:32.612868 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.612897 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:32 crc kubenswrapper[4765]: E0121 13:03:32.612963 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:32 crc kubenswrapper[4765]: E0121 13:03:32.613199 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.691635 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.691694 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.691704 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.691718 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.691727 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:32Z","lastTransitionTime":"2026-01-21T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.696850 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:32:22.877331476 +0000 UTC Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.794770 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.794810 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.794820 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.794836 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.794849 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:32Z","lastTransitionTime":"2026-01-21T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.897840 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.897939 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.897954 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.897980 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:32 crc kubenswrapper[4765]: I0121 13:03:32.897997 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:32Z","lastTransitionTime":"2026-01-21T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.000666 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.000702 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.000715 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.000733 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.000745 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:33Z","lastTransitionTime":"2026-01-21T13:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.103725 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.103769 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.103779 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.103797 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.103807 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:33Z","lastTransitionTime":"2026-01-21T13:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.206283 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.206344 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.206361 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.206386 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.206404 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:33Z","lastTransitionTime":"2026-01-21T13:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.309246 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.309310 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.309327 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.309353 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.309371 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:33Z","lastTransitionTime":"2026-01-21T13:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.411745 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.411790 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.411803 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.411821 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.411834 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:33Z","lastTransitionTime":"2026-01-21T13:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.514046 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.514139 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.514159 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.514180 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.514190 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:33Z","lastTransitionTime":"2026-01-21T13:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.613674 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:33 crc kubenswrapper[4765]: E0121 13:03:33.613869 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.616750 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.616787 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.616797 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.616812 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.616821 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:33Z","lastTransitionTime":"2026-01-21T13:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.697983 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:46:07.721594063 +0000 UTC Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.719358 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.719428 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.719447 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.719471 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.719487 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:33Z","lastTransitionTime":"2026-01-21T13:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.825198 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.825272 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.825286 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.825313 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.825327 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:33Z","lastTransitionTime":"2026-01-21T13:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.928396 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.928459 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.928471 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.928487 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:33 crc kubenswrapper[4765]: I0121 13:03:33.928496 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:33Z","lastTransitionTime":"2026-01-21T13:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.030595 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.030648 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.030658 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.030672 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.030682 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:34Z","lastTransitionTime":"2026-01-21T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.133380 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.133464 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.133486 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.133514 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.133535 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:34Z","lastTransitionTime":"2026-01-21T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.236562 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.236620 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.236633 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.236657 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.236671 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:34Z","lastTransitionTime":"2026-01-21T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.340554 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.340592 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.340603 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.340619 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.340631 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:34Z","lastTransitionTime":"2026-01-21T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.443867 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.443913 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.443925 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.443943 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.443956 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:34Z","lastTransitionTime":"2026-01-21T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.546648 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.546704 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.546721 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.546744 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.546759 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:34Z","lastTransitionTime":"2026-01-21T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.613272 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.613316 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.613272 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:34 crc kubenswrapper[4765]: E0121 13:03:34.613431 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:34 crc kubenswrapper[4765]: E0121 13:03:34.613475 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:34 crc kubenswrapper[4765]: E0121 13:03:34.613536 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.649195 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.649246 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.649255 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.649268 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.649277 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:34Z","lastTransitionTime":"2026-01-21T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.699094 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 03:14:37.693529667 +0000 UTC Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.751273 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.751323 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.751336 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.751354 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.751366 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:34Z","lastTransitionTime":"2026-01-21T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.853533 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.853583 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.853596 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.853616 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.853626 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:34Z","lastTransitionTime":"2026-01-21T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.956329 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.956361 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.956370 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.956386 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:34 crc kubenswrapper[4765]: I0121 13:03:34.956396 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:34Z","lastTransitionTime":"2026-01-21T13:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.059035 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.059078 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.059087 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.059103 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.059113 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:35Z","lastTransitionTime":"2026-01-21T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.161722 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.161767 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.161778 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.161796 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.161809 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:35Z","lastTransitionTime":"2026-01-21T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.266693 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.266744 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.266764 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.266786 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.266801 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:35Z","lastTransitionTime":"2026-01-21T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.369489 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.369539 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.369556 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.369578 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.369594 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:35Z","lastTransitionTime":"2026-01-21T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.472671 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.472710 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.472718 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.472732 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.472745 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:35Z","lastTransitionTime":"2026-01-21T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.575336 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.575388 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.575400 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.575417 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.575426 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:35Z","lastTransitionTime":"2026-01-21T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.613267 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:35 crc kubenswrapper[4765]: E0121 13:03:35.613445 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.677999 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.678038 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.678050 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.678070 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.678084 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:35Z","lastTransitionTime":"2026-01-21T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.699773 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 13:07:20.069056238 +0000 UTC Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.781294 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.781384 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.781411 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.781450 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.781472 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:35Z","lastTransitionTime":"2026-01-21T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.884324 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.884364 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.884375 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.884394 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.884405 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:35Z","lastTransitionTime":"2026-01-21T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.986872 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.986924 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.986939 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.986960 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:35 crc kubenswrapper[4765]: I0121 13:03:35.986976 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:35Z","lastTransitionTime":"2026-01-21T13:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.089823 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.089877 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.089896 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.089924 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.089941 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:36Z","lastTransitionTime":"2026-01-21T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.192789 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.192886 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.192903 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.192926 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.192939 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:36Z","lastTransitionTime":"2026-01-21T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.296578 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.296618 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.296632 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.296649 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.296660 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:36Z","lastTransitionTime":"2026-01-21T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.321461 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs\") pod \"network-metrics-daemon-4t7jw\" (UID: \"d8dea79f-de5c-4034-9742-c322b723a59c\") " pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:36 crc kubenswrapper[4765]: E0121 13:03:36.321639 4765 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:03:36 crc kubenswrapper[4765]: E0121 13:03:36.321721 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs podName:d8dea79f-de5c-4034-9742-c322b723a59c nodeName:}" failed. No retries permitted until 2026-01-21 13:04:08.321698366 +0000 UTC m=+109.339424218 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs") pod "network-metrics-daemon-4t7jw" (UID: "d8dea79f-de5c-4034-9742-c322b723a59c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.399874 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.399917 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.399929 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.399947 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.399962 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:36Z","lastTransitionTime":"2026-01-21T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.502806 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.502880 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.502894 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.502910 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.502921 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:36Z","lastTransitionTime":"2026-01-21T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
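The 32s in durationBeforeRetry is consistent with an exponential backoff that doubles from an initial 500 ms (0.5 s, 1 s, 2 s, ..., 32 s) up to a cap, which is how kubelet's nested pending operations throttle a repeatedly failing volume mount; the exact constants in the sketch below are assumptions chosen to reproduce the logged value.

// backoff.go - sketch of the per-operation exponential backoff implied by
// "No retries permitted until ... (durationBeforeRetry 32s)". The initial
// delay, factor, and cap are assumptions; kubelet's real constants live in
// nestedpendingoperations.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	delay, max time.Duration
}

func (b *backoff) next() time.Duration {
	d := b.delay
	b.delay *= 2 // double after every failed attempt
	if b.delay > b.max {
		b.delay = b.max
	}
	return d
}

func main() {
	b := backoff{delay: 500 * time.Millisecond, max: 2 * time.Minute}
	for i := 1; i <= 8; i++ {
		fmt.Printf("attempt %d: wait %v before retry\n", i, b.next())
	}
	// Attempt 7 prints 32s, matching durationBeforeRetry in the log.
}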
Has your network provider started?"} Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.605725 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.605800 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.605831 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.605848 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.605860 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:36Z","lastTransitionTime":"2026-01-21T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.613526 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.613632 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:36 crc kubenswrapper[4765]: E0121 13:03:36.613680 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.613526 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:36 crc kubenswrapper[4765]: E0121 13:03:36.613804 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:36 crc kubenswrapper[4765]: E0121 13:03:36.613844 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.700339 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:50:41.47747224 +0000 UTC Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.711169 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.711255 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.711274 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.711292 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.711303 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:36Z","lastTransitionTime":"2026-01-21T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.814672 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.814713 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.814722 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.814740 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.814750 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:36Z","lastTransitionTime":"2026-01-21T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.917064 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.917129 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.917142 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.917158 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:36 crc kubenswrapper[4765]: I0121 13:03:36.917191 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:36Z","lastTransitionTime":"2026-01-21T13:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.020335 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.020388 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.020398 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.020446 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.020460 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:37Z","lastTransitionTime":"2026-01-21T13:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.123184 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.123263 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.123278 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.123293 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.123303 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:37Z","lastTransitionTime":"2026-01-21T13:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.226955 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.227051 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.227068 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.227095 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.227113 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:37Z","lastTransitionTime":"2026-01-21T13:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.330147 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.330203 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.330240 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.330264 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.330283 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:37Z","lastTransitionTime":"2026-01-21T13:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.433957 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.434031 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.434048 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.434072 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.434088 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:37Z","lastTransitionTime":"2026-01-21T13:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.537563 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.537604 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.537616 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.537633 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.537643 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:37Z","lastTransitionTime":"2026-01-21T13:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.613752 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:37 crc kubenswrapper[4765]: E0121 13:03:37.613985 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.641330 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.641367 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.641378 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.641393 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.641402 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:37Z","lastTransitionTime":"2026-01-21T13:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.700534 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:54:19.149179193 +0000 UTC Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.743987 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.744039 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.744051 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.744070 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.744082 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:37Z","lastTransitionTime":"2026-01-21T13:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.846802 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.846847 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.846858 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.846873 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.846882 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:37Z","lastTransitionTime":"2026-01-21T13:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.949451 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.949501 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.949518 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.949539 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:37 crc kubenswrapper[4765]: I0121 13:03:37.949551 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:37Z","lastTransitionTime":"2026-01-21T13:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.052431 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.052518 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.052539 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.052557 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.052567 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:38Z","lastTransitionTime":"2026-01-21T13:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.155932 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.156027 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.156043 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.156070 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.156091 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:38Z","lastTransitionTime":"2026-01-21T13:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.259928 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.259983 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.260000 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.260027 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.260045 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:38Z","lastTransitionTime":"2026-01-21T13:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.363269 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.363346 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.363367 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.363396 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.363418 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:38Z","lastTransitionTime":"2026-01-21T13:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.466106 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.466151 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.466159 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.466173 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.466183 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:38Z","lastTransitionTime":"2026-01-21T13:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.569677 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.569711 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.569720 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.569745 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.569765 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:38Z","lastTransitionTime":"2026-01-21T13:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.612671 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.612743 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.612797 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:38 crc kubenswrapper[4765]: E0121 13:03:38.612894 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:38 crc kubenswrapper[4765]: E0121 13:03:38.612997 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:38 crc kubenswrapper[4765]: E0121 13:03:38.612815 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.673336 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.673409 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.673435 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.673462 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.673480 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:38Z","lastTransitionTime":"2026-01-21T13:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.701052 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 00:51:57.586049351 +0000 UTC Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.776250 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.776289 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.776300 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.776321 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.776333 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:38Z","lastTransitionTime":"2026-01-21T13:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.879454 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.879487 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.879500 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.879519 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.879530 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:38Z","lastTransitionTime":"2026-01-21T13:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.981749 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.981801 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.981810 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.981828 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:38 crc kubenswrapper[4765]: I0121 13:03:38.981838 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:38Z","lastTransitionTime":"2026-01-21T13:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.084369 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.084424 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.084435 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.084455 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.084466 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:39Z","lastTransitionTime":"2026-01-21T13:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.187225 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.187268 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.187279 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.187297 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.187307 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:39Z","lastTransitionTime":"2026-01-21T13:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.291130 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.291275 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.291315 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.291361 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.291399 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:39Z","lastTransitionTime":"2026-01-21T13:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.395310 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.395371 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.395384 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.395401 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.395412 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:39Z","lastTransitionTime":"2026-01-21T13:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.498806 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.498846 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.498855 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.498874 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.498885 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:39Z","lastTransitionTime":"2026-01-21T13:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.601508 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.601565 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.601576 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.601601 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.601619 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:39Z","lastTransitionTime":"2026-01-21T13:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.613299 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:39 crc kubenswrapper[4765]: E0121 13:03:39.613538 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.627944 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.643585 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7ea5517a7397813e068f0d9bdac688d17d12cfe232c604d4b47ec4e044ab9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.662761 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6dfd8de5f2fe457fc5e2f9274dc0badfe9851193e00bc206ffc03d4add302b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:07Z\\\",\\\"message\\\":\\\"s/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.627832 6124 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 13:03:07.628081 6124 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.628078 6124 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.627796 6124 factory.go:656] Stopping watch factory\\\\nI0121 13:03:07.628593 6124 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.628834 6124 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 13:03:07.629259 6124 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 13:03:07.629537 6124 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 13:03:07.629907 6124 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:25Z\\\",\\\"message\\\":\\\"03:24.912748 6345 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:169.254.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4de02fb8-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 13:03:24.912867 6345 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.675254 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.689287 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.701743 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 09:25:04.087933573 +0000 UTC Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.702540 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.704498 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.704568 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.704583 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.704599 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.704638 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:39Z","lastTransitionTime":"2026-01-21T13:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.716275 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.733131 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.746730 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.757921 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dea79f-de5c-4034-9742-c322b723a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t7jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-21T13:03:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.771385 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.780181 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.789672 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.806523 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.806565 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.806579 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.806596 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.806605 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:39Z","lastTransitionTime":"2026-01-21T13:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.808865 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.818933 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:39Z is after 2025-08-24T17:21:41Z" Jan 21 
13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.830579 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8b07e5-b316-45ac-8511-cb09b9d4d3bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5111055c302cebecfb649ba86b3c51d36213cdbebe7c90c5aadea87dc93399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a549a3dc26287c8cab6ffaaf643a3b7a9aee3ba27f10f0741c11412d152b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e73f5c3b6b993cba5ad746efdbe1e24cb5bd1ac653a80d6c47eaaff07d917eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.842673 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014b5379-702d-46a3-a4c7-081c286a5c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4bb3739eb8cd7744b7117f4db0817ff3feb326f9016dedb4bfb5dc0614ed0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55035304fca2b0d4e18e762b3ae515727a123c3ad9fd7eb44e2672a8881bad8c\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55035304fca2b0d4e18e762b3ae515727a123c3ad9fd7eb44e2672a8881bad8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.858804 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e63
55e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:39Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.909697 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.909754 4765 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.909770 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.909791 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:39 crc kubenswrapper[4765]: I0121 13:03:39.909804 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:39Z","lastTransitionTime":"2026-01-21T13:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.012505 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.012564 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.012577 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.012598 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.012612 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:40Z","lastTransitionTime":"2026-01-21T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.116385 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.116427 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.116437 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.116455 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.116486 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:40Z","lastTransitionTime":"2026-01-21T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.219053 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.219113 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.219126 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.219146 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.219159 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:40Z","lastTransitionTime":"2026-01-21T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.323274 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.323346 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.323356 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.323373 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.323383 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:40Z","lastTransitionTime":"2026-01-21T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.425791 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.425847 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.425863 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.425883 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.425894 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:40Z","lastTransitionTime":"2026-01-21T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.528624 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.528664 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.528676 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.528695 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.528706 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:40Z","lastTransitionTime":"2026-01-21T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.613340 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.613402 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.613501 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:40 crc kubenswrapper[4765]: E0121 13:03:40.613637 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:40 crc kubenswrapper[4765]: E0121 13:03:40.613793 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:40 crc kubenswrapper[4765]: E0121 13:03:40.613887 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.631702 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.631824 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.631846 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.631868 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.631881 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:40Z","lastTransitionTime":"2026-01-21T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.703011 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 22:18:25.332578601 +0000 UTC Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.736269 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.736310 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.736319 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.736334 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.736344 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:40Z","lastTransitionTime":"2026-01-21T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.839837 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.839903 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.839928 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.839958 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.839983 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:40Z","lastTransitionTime":"2026-01-21T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.943697 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.943765 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.943784 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.943812 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:40 crc kubenswrapper[4765]: I0121 13:03:40.943829 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:40Z","lastTransitionTime":"2026-01-21T13:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.047892 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.047965 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.047993 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.048027 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.048050 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:41Z","lastTransitionTime":"2026-01-21T13:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.150904 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.150967 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.150983 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.151004 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.151020 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:41Z","lastTransitionTime":"2026-01-21T13:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.254082 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.254145 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.254159 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.254177 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.254194 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:41Z","lastTransitionTime":"2026-01-21T13:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.357263 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.357313 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.357324 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.357341 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.357353 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:41Z","lastTransitionTime":"2026-01-21T13:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.460053 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.460087 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.460098 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.460114 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.460125 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:41Z","lastTransitionTime":"2026-01-21T13:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.562993 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.563053 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.563071 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.563099 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.563118 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:41Z","lastTransitionTime":"2026-01-21T13:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.613452 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.614351 4765 scope.go:117] "RemoveContainer" containerID="3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1" Jan 21 13:03:41 crc kubenswrapper[4765]: E0121 13:03:41.614693 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:41 crc kubenswrapper[4765]: E0121 13:03:41.614702 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x677d_openshift-ovn-kubernetes(cd80c14d-ebec-4d65-8116-149400d6f8be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.633606 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:41Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.652842 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:41Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.666546 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.666596 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.666611 4765 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.666631 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.666644 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:41Z","lastTransitionTime":"2026-01-21T13:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.666887 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:41Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.684655 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7ea5517a7397813e068f0d9bdac688d17d12cfe232c604d4b47ec4e044ab9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:41Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.703705 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 21:22:42.331890602 +0000 UTC Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.708867 4765 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:25Z\\\",\\\"message\\\":\\\"03:24.912748 6345 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:169.254.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4de02fb8-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 13:03:24.912867 6345 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x677d_openshift-ovn-kubernetes(cd80c14d-ebec-4d65-8116-149400d6f8be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:41Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.723849 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:41Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.742590 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:41Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.756296 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:41Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.769855 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:41Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.769878 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.770029 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.770041 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.770057 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.770070 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:41Z","lastTransitionTime":"2026-01-21T13:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.781154 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:41Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.794056 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:41Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.804371 4765 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dea79f-de5c-4034-9742-c322b723a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t7jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:41Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.815980 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:41Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.826711 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014b5379-702d-46a3-a4c7-081c286a5c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4bb3739eb8cd7744b7117f4db0817ff3feb326f9016dedb4bfb5dc0614ed0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55035304fca2b0d4e18e762b3ae515727a123c3ad9fd7eb44e2672a8881bad8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55035304fca2b0d4e18e762b3ae515727a123c3ad9fd7eb44e2672a8881bad8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:41Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.842775 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:41Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.856416 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:41Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.867991 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:41Z is after 2025-08-24T17:21:41Z" Jan 21 
13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.871996 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.872047 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.872058 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.872076 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.872089 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:41Z","lastTransitionTime":"2026-01-21T13:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.880712 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8b07e5-b316-45ac-8511-cb09b9d4d3bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5111055c302cebecfb649ba86b3c51d36213cdbebe7c90c5aadea87dc93399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a549a3dc26287c8cab6ffaaf643a3b7a9aee3ba27f10f0741c11412d152b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e73f5c3b6b993cba5ad746efdbe1e24cb5bd1ac653a80d6c47eaaff07d917eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:41Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.975102 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.975135 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.975144 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.975158 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:41 crc kubenswrapper[4765]: I0121 13:03:41.975166 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:41Z","lastTransitionTime":"2026-01-21T13:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.078332 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.078375 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.078385 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.078400 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.078412 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:42Z","lastTransitionTime":"2026-01-21T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.180856 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.180904 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.180916 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.180933 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.180945 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:42Z","lastTransitionTime":"2026-01-21T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.283438 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.283488 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.283497 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.283512 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.283522 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:42Z","lastTransitionTime":"2026-01-21T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.387106 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.387174 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.387188 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.387215 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.387273 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:42Z","lastTransitionTime":"2026-01-21T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.491082 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.491142 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.491161 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.491182 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.491198 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:42Z","lastTransitionTime":"2026-01-21T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.594065 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.594121 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.594143 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.594201 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.594278 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:42Z","lastTransitionTime":"2026-01-21T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.613397 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.613474 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:42 crc kubenswrapper[4765]: E0121 13:03:42.613594 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:42 crc kubenswrapper[4765]: E0121 13:03:42.613693 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.614050 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:42 crc kubenswrapper[4765]: E0121 13:03:42.614257 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.697115 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.697156 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.697167 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.697187 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.697198 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:42Z","lastTransitionTime":"2026-01-21T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.704284 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 09:51:44.907740894 +0000 UTC Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.800056 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.800095 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.800105 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.800121 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.800132 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:42Z","lastTransitionTime":"2026-01-21T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.902824 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.902867 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.902878 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.902894 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.902904 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:42Z","lastTransitionTime":"2026-01-21T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.940056 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.940317 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.940462 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.940488 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.940497 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:42Z","lastTransitionTime":"2026-01-21T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:42 crc kubenswrapper[4765]: E0121 13:03:42.961990 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:42Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.966125 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.966162 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.966173 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.966189 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.966202 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:42Z","lastTransitionTime":"2026-01-21T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:42 crc kubenswrapper[4765]: E0121 13:03:42.979011 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:42Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.983125 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.983177 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.983192 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.983241 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:42 crc kubenswrapper[4765]: I0121 13:03:42.983257 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:42Z","lastTransitionTime":"2026-01-21T13:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:42 crc kubenswrapper[4765]: E0121 13:03:42.998617 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:42Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.002253 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.002275 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.002282 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.002298 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.002306 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:43Z","lastTransitionTime":"2026-01-21T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:43 crc kubenswrapper[4765]: E0121 13:03:43.017720 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:43Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.022304 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.022373 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.022387 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.022410 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.022425 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:43Z","lastTransitionTime":"2026-01-21T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:43 crc kubenswrapper[4765]: E0121 13:03:43.037416 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:43Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:43 crc kubenswrapper[4765]: E0121 13:03:43.037585 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.039683 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.039715 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.039724 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.039742 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.039753 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:43Z","lastTransitionTime":"2026-01-21T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.143087 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.143133 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.143142 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.143162 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.143172 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:43Z","lastTransitionTime":"2026-01-21T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.246606 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.246656 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.246665 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.246682 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.246694 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:43Z","lastTransitionTime":"2026-01-21T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.349956 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.350003 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.350016 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.350034 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.350047 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:43Z","lastTransitionTime":"2026-01-21T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.456651 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.456697 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.456708 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.456726 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.456739 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:43Z","lastTransitionTime":"2026-01-21T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.559707 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.559763 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.559776 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.559795 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.559808 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:43Z","lastTransitionTime":"2026-01-21T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.613577 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:43 crc kubenswrapper[4765]: E0121 13:03:43.613715 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.661917 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.661953 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.661962 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.661977 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.661988 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:43Z","lastTransitionTime":"2026-01-21T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.705382 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 05:44:48.154731352 +0000 UTC Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.748296 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bplfq_d9b9a5be-6b15-46d2-8715-506efdae8ae7/kube-multus/0.log" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.748345 4765 generic.go:334] "Generic (PLEG): container finished" podID="d9b9a5be-6b15-46d2-8715-506efdae8ae7" containerID="9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce" exitCode=1 Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.748390 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bplfq" event={"ID":"d9b9a5be-6b15-46d2-8715-506efdae8ae7","Type":"ContainerDied","Data":"9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce"} Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.748914 4765 scope.go:117] "RemoveContainer" containerID="9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.763143 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:43Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:43 crc 
kubenswrapper[4765]: I0121 13:03:43.765645 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.765688 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.765698 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.765713 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.765722 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:43Z","lastTransitionTime":"2026-01-21T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.779395 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7ea5517a7397813e068f0d9bdac688d17d12cfe232c604d4b47ec4e044ab9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:02Z\\\",\\\"reason\\\"
:\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:43Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.799152 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ed2d1febabd20ed788dae152ba09e38b56ad40d
228861d7ca9659439f0845e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:25Z\\\",\\\"message\\\":\\\"03:24.912748 6345 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:169.254.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4de02fb8-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 13:03:24.912867 6345 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x677d_openshift-ovn-kubernetes(cd80c14d-ebec-4d65-8116-149400d6f8be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:43Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.815926 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:43Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.833665 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:43Z is after 
2025-08-24T17:21:41Z" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.847053 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:43Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.863693 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:43Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.868055 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.868104 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.868116 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.868132 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.868142 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:43Z","lastTransitionTime":"2026-01-21T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.880146 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"message\\\":\\\"2026-01-21T13:02:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0aec8178-382a-4cf6-b094-9b944d46848b\\\\n2026-01-21T13:02:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0aec8178-382a-4cf6-b094-9b944d46848b to /host/opt/cni/bin/\\\\n2026-01-21T13:02:58Z [verbose] multus-daemon started\\\\n2026-01-21T13:02:58Z [verbose] Readiness Indicator file check\\\\n2026-01-21T13:03:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:43Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.896686 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:43Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.911029 4765 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dea79f-de5c-4034-9742-c322b723a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t7jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:43Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.925721 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:43Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.938506 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:43Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.950135 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:43Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.962770 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:43Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.971420 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.971464 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.971477 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.971495 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.971505 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:43Z","lastTransitionTime":"2026-01-21T13:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.975585 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:43Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.986901 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8b07e5-b316-45ac-8511-cb09b9d4d3bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5111055c302cebecfb649ba86b3c51d36213cdbebe7c90c5aadea87dc93399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a549a3dc26287c8cab6ffaaf643a3b7a9aee3ba27f10f0741c11412d152b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e73f5c3b6b993cba5ad746efdbe1e24cb5bd1ac653a80d6c47eaaff07d917eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:43Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:43 crc kubenswrapper[4765]: I0121 13:03:43.996210 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014b5379-702d-46a3-a4c7-081c286a5c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4bb3739eb8cd7744b7117f4db0817ff3feb326f9016dedb4bfb5dc0614ed0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55035304fca2b0d4e18e762b3ae515727a123c3ad9fd7eb44e2672a8881bad8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55035304fca2b0d4e18e762b3ae515727a123c3ad9fd7eb44e2672a8881bad8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:43Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.009073 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:44Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.074444 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.074752 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.074821 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.074899 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.074957 4765 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:44Z","lastTransitionTime":"2026-01-21T13:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.177816 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.177877 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.177891 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.177912 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.177927 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:44Z","lastTransitionTime":"2026-01-21T13:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.280805 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.281068 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.281180 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.281303 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.281413 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:44Z","lastTransitionTime":"2026-01-21T13:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.383800 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.384349 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.384434 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.384514 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.384590 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:44Z","lastTransitionTime":"2026-01-21T13:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.487238 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.487281 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.487290 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.487313 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.487323 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:44Z","lastTransitionTime":"2026-01-21T13:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.591136 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.591566 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.591660 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.591754 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.591851 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:44Z","lastTransitionTime":"2026-01-21T13:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.612929 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.613026 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:44 crc kubenswrapper[4765]: E0121 13:03:44.613145 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:44 crc kubenswrapper[4765]: E0121 13:03:44.613317 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.613653 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:44 crc kubenswrapper[4765]: E0121 13:03:44.613844 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.694254 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.694567 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.694710 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.694827 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.694918 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:44Z","lastTransitionTime":"2026-01-21T13:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.705836 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 08:36:26.961106933 +0000 UTC Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.754370 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bplfq_d9b9a5be-6b15-46d2-8715-506efdae8ae7/kube-multus/0.log" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.754442 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bplfq" event={"ID":"d9b9a5be-6b15-46d2-8715-506efdae8ae7","Type":"ContainerStarted","Data":"79123ef5ce55b0a6e560030a8178ca3e5f52456eca3c33dc0598e5612c71fa3f"} Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.765199 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:44Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.776620 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8b07e5-b316-45ac-8511-cb09b9d4d3bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5111055c302cebecfb649ba86b3c51d36213cdbebe7c90c5aadea87dc93399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a549a3dc26287c8cab6ffaaf643a3b7a9aee3ba27f10f0741c11412d152b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e73f5c3b6b993cba5ad746efdbe1e24cb5bd1ac653a80d6c47eaaff07d917eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:44Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.789018 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014b5379-702d-46a3-a4c7-081c286a5c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4bb3739eb8cd7744b7117f4db0817ff3feb326f9016dedb4bfb5dc0614ed0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55035304fca2b0d4e18e762b3ae515727a123c3ad9fd7eb44e2672a8881bad8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55035304fca2b0d4e18e762b3ae515727a123c3ad9fd7eb44e2672a8881bad8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:44Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.798312 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.798370 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.798382 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.798406 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.798418 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:44Z","lastTransitionTime":"2026-01-21T13:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.802528 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:44Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.816185 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:44Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.829895 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7ea5517a7397813e068f0d9bdac688d17d12cfe232c604d4b47ec4e044ab9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:44Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.851729 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:25Z\\\",\\\"message\\\":\\\"03:24.912748 6345 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:169.254.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4de02fb8-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 13:03:24.912867 6345 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x677d_openshift-ovn-kubernetes(cd80c14d-ebec-4d65-8116-149400d6f8be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:44Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.869985 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:44Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.887254 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:44Z is after 
2025-08-24T17:21:41Z" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.901054 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.901084 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.901094 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.901109 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.901120 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:44Z","lastTransitionTime":"2026-01-21T13:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.906555 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:44Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 
13:03:44.923657 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:44Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.940265 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:44Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.959526 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79123ef5ce55b0a6e560030a8178ca3e5f52456eca3c33dc0598e5612c71fa3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"message\\\":\\\"2026-01-21T13:02:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0aec8178-382a-4cf6-b094-9b944d46848b\\\\n2026-01-21T13:02:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0aec8178-382a-4cf6-b094-9b944d46848b to /host/opt/cni/bin/\\\\n2026-01-21T13:02:58Z [verbose] multus-daemon started\\\\n2026-01-21T13:02:58Z [verbose] Readiness Indicator file check\\\\n2026-01-21T13:03:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:44Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.975607 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dea79f-de5c-4034-9742-c322b723a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t7jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:44Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.988809 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:44Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:44 crc kubenswrapper[4765]: I0121 13:03:44.999254 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:44Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.004986 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.005039 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.005058 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.005085 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.005101 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:45Z","lastTransitionTime":"2026-01-21T13:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.010886 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:45Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.028191 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:45Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.108358 4765 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.108425 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.108444 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.108471 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.108496 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:45Z","lastTransitionTime":"2026-01-21T13:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.211020 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.211085 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.211097 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.211117 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.211159 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:45Z","lastTransitionTime":"2026-01-21T13:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.314541 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.314616 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.314634 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.314659 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.314678 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:45Z","lastTransitionTime":"2026-01-21T13:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.417795 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.417867 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.417888 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.417914 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.417932 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:45Z","lastTransitionTime":"2026-01-21T13:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.520864 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.520940 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.520974 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.521006 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.521027 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:45Z","lastTransitionTime":"2026-01-21T13:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.531719 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.531865 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.531924 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.531975 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:45 crc kubenswrapper[4765]: E0121 13:03:45.532005 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:49.531968271 +0000 UTC m=+150.549694143 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.532057 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:45 crc kubenswrapper[4765]: E0121 13:03:45.532090 4765 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:03:45 crc kubenswrapper[4765]: E0121 13:03:45.532091 4765 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:03:45 crc kubenswrapper[4765]: E0121 13:03:45.532138 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:03:45 crc kubenswrapper[4765]: E0121 13:03:45.532169 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:04:49.532147486 +0000 UTC m=+150.549873358 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:03:45 crc kubenswrapper[4765]: E0121 13:03:45.532179 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:03:45 crc kubenswrapper[4765]: E0121 13:03:45.532203 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:04:49.532185537 +0000 UTC m=+150.549911399 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:03:45 crc kubenswrapper[4765]: E0121 13:03:45.532205 4765 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:03:45 crc kubenswrapper[4765]: E0121 13:03:45.532330 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:03:45 crc kubenswrapper[4765]: E0121 13:03:45.532368 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:03:45 crc kubenswrapper[4765]: E0121 13:03:45.532392 4765 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:03:45 crc kubenswrapper[4765]: E0121 13:03:45.532338 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 13:04:49.53231867 +0000 UTC m=+150.550044522 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:03:45 crc kubenswrapper[4765]: E0121 13:03:45.532476 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 13:04:49.532455094 +0000 UTC m=+150.550180966 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.613351 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:45 crc kubenswrapper[4765]: E0121 13:03:45.613593 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.624708 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.624785 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.624812 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.624841 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.624864 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:45Z","lastTransitionTime":"2026-01-21T13:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.706250 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 16:11:45.100115495 +0000 UTC Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.727580 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.727648 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.727659 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.727676 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.727687 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:45Z","lastTransitionTime":"2026-01-21T13:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.830709 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.830758 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.830770 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.830790 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.830804 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:45Z","lastTransitionTime":"2026-01-21T13:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.933720 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.933766 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.933778 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.933795 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:45 crc kubenswrapper[4765]: I0121 13:03:45.933811 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:45Z","lastTransitionTime":"2026-01-21T13:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.036261 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.036309 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.036318 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.036333 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.036345 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:46Z","lastTransitionTime":"2026-01-21T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.138848 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.139330 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.139445 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.139543 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.139637 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:46Z","lastTransitionTime":"2026-01-21T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.242603 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.242675 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.242698 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.242728 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.242753 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:46Z","lastTransitionTime":"2026-01-21T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.346007 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.346333 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.346420 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.346567 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.346689 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:46Z","lastTransitionTime":"2026-01-21T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.450292 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.450338 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.450354 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.450375 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.450393 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:46Z","lastTransitionTime":"2026-01-21T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.553275 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.553671 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.553744 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.553810 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.553878 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:46Z","lastTransitionTime":"2026-01-21T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.613773 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.613815 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.613795 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:46 crc kubenswrapper[4765]: E0121 13:03:46.613970 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:46 crc kubenswrapper[4765]: E0121 13:03:46.614351 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:46 crc kubenswrapper[4765]: E0121 13:03:46.614614 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.656703 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.656777 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.656793 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.656818 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.656830 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:46Z","lastTransitionTime":"2026-01-21T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.707093 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:06:53.533280166 +0000 UTC Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.759657 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.759718 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.759728 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.759742 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.759778 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:46Z","lastTransitionTime":"2026-01-21T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.864280 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.864346 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.864363 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.864387 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.864404 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:46Z","lastTransitionTime":"2026-01-21T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.967466 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.967540 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.967567 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.967602 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:46 crc kubenswrapper[4765]: I0121 13:03:46.967628 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:46Z","lastTransitionTime":"2026-01-21T13:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.070861 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.070930 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.070949 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.070977 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.070999 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:47Z","lastTransitionTime":"2026-01-21T13:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.173718 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.173776 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.173789 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.173810 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.173822 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:47Z","lastTransitionTime":"2026-01-21T13:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.277536 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.277582 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.277597 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.277622 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.277633 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:47Z","lastTransitionTime":"2026-01-21T13:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.380501 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.380552 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.380561 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.380581 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.380592 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:47Z","lastTransitionTime":"2026-01-21T13:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.483295 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.483340 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.483351 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.483368 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.483379 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:47Z","lastTransitionTime":"2026-01-21T13:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.587056 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.587115 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.587130 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.587153 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.587171 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:47Z","lastTransitionTime":"2026-01-21T13:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.613647 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:47 crc kubenswrapper[4765]: E0121 13:03:47.613856 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.690154 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.690247 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.690273 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.690300 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.690318 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:47Z","lastTransitionTime":"2026-01-21T13:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.707537 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 14:41:02.865134621 +0000 UTC Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.792930 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.792958 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.792966 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.792979 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.792989 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:47Z","lastTransitionTime":"2026-01-21T13:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.895535 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.895584 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.895599 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.895619 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.895634 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:47Z","lastTransitionTime":"2026-01-21T13:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.998529 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.998603 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.998631 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.998657 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:47 crc kubenswrapper[4765]: I0121 13:03:47.998674 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:47Z","lastTransitionTime":"2026-01-21T13:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.100912 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.100996 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.101024 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.101056 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.101082 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:48Z","lastTransitionTime":"2026-01-21T13:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.204056 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.204145 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.204170 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.204250 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.204277 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:48Z","lastTransitionTime":"2026-01-21T13:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.308430 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.308492 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.308509 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.308531 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.308543 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:48Z","lastTransitionTime":"2026-01-21T13:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.411546 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.411600 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.411621 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.411650 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.411668 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:48Z","lastTransitionTime":"2026-01-21T13:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.514913 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.514965 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.514978 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.515000 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.515014 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:48Z","lastTransitionTime":"2026-01-21T13:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.613753 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.613832 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:48 crc kubenswrapper[4765]: E0121 13:03:48.613984 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.614169 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:48 crc kubenswrapper[4765]: E0121 13:03:48.614322 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:48 crc kubenswrapper[4765]: E0121 13:03:48.614503 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.618710 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.618759 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.618777 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.618804 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.618827 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:48Z","lastTransitionTime":"2026-01-21T13:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.708457 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 14:48:20.863588836 +0000 UTC Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.721923 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.721970 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.721980 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.721999 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.722013 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:48Z","lastTransitionTime":"2026-01-21T13:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.824354 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.824431 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.824449 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.824470 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.824485 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:48Z","lastTransitionTime":"2026-01-21T13:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.926783 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.926835 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.926850 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.926872 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:48 crc kubenswrapper[4765]: I0121 13:03:48.926887 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:48Z","lastTransitionTime":"2026-01-21T13:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.029841 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.029892 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.029908 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.029927 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.029941 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:49Z","lastTransitionTime":"2026-01-21T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.133396 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.133426 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.133435 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.133449 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.133458 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:49Z","lastTransitionTime":"2026-01-21T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.236376 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.236435 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.236446 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.236467 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.236478 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:49Z","lastTransitionTime":"2026-01-21T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.340922 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.341007 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.341018 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.341066 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.341078 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:49Z","lastTransitionTime":"2026-01-21T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.444418 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.444503 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.444521 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.444541 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.444554 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:49Z","lastTransitionTime":"2026-01-21T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.547296 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.547326 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.547336 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.547353 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.547368 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:49Z","lastTransitionTime":"2026-01-21T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.612786 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:49 crc kubenswrapper[4765]: E0121 13:03:49.612947 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.628464 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef8b07e5-b316-45ac-8511-cb09b9d4d3bd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a5111055c302cebecfb649ba86b3c51d36213cdbebe7c90c5aadea87dc93399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68a549a3dc26287c8cab6ffaaf643a3b7a9aee3ba27f10f0741c11412d152b69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e73f5c3b6b993cba5ad746efdbe1e24cb5bd1ac653a80d6c47eaaff07d917eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7da727b5144c78dde36f993da463ece39fe97fadf5d3c44302b0201661a2411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.645401 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014b5379-702d-46a3-a4c7-081c286a5c61\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d4bb3739eb8cd7744b7117f4db0817ff3feb326f9016dedb4bfb5dc0614ed0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55035304fca2b0d4e18e762b3ae515727a123c3ad9fd7eb44e2672a8881bad8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55035304fca2b0d4e18e762b3ae515727a123c3ad9fd7eb44e2672a8881bad8c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.656939 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.657001 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.657020 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.657040 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.657051 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:49Z","lastTransitionTime":"2026-01-21T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.665683 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1981c521-dcec-4302-b34b-4464c8ebf331\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T13:02:40Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 13:02:32.845326 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 13:02:32.846754 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2089730151/tls.crt::/tmp/serving-cert-2089730151/tls.key\\\\\\\"\\\\nI0121 13:02:40.221562 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 13:02:40.224572 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 13:02:40.224598 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 13:02:40.224644 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 13:02:40.224652 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 13:02:40.416048 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 13:02:40.416100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416311 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 13:02:40.416318 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 13:02:40.416323 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 13:02:40.416328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 13:02:40.416331 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 13:02:40.416719 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF0121 13:02:40.419458 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.682977 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.695834 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd4a2a03-192d-4335-b808-aa313f573870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c0f55316e61a0931a6161bae43ea36d8532047de33c889e20be29a02c25891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5c74967cc788fc76fc179316f12f3f187091ab62b92a4de4e62bf76bfe1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qcr5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvtm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:49Z is after 2025-08-24T17:21:41Z" Jan 21 
13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.709524 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 07:45:30.557539631 +0000 UTC Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.710344 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e8e71a1cd7c6f9d67e3013da8df5ca539789ebf79ac08f75508d8a407a2938f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.730085 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ceb735bfebd83042490340e22ba1e056c803939740abb93beafe26cff6913b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a53f8f688c74c62456b7f9685c08abe0fa6ed2c4611609a4d1e0d8a4cf70db5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.744458 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5081c2a070886bcaea8df86970fab3671fa626aa4c56cf6a5d9e379b695f3289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.760838 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.760882 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.760893 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.760910 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.760920 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:49Z","lastTransitionTime":"2026-01-21T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.760993 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z68f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f3d99e-f58c-4caa-be45-b879c6b614d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e7ea5517a7397813e068f0d9bdac688d17d12cfe232c604d4b47ec4e044ab9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc149d0951ee2018ed6648fe03ecba088143ac1537721ea1e5ad1278fcda090c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a28e1c4e4f90b4cf5bda0d13e790a6ba0ae0f69ec2f6d2b5da4367d4eefcd7ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b01dc191b6fb93180e3df796f302be6ea0c1e3707c16b64833270f85e4964f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49947bb9e6684be3057341d2d9bfab4ecf93f22f439e1c5e1f97bd2d47b9f0f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f6342fa1f50988b07be9fd066e7205716ea499197bf2cd8e41cfd52b04520b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c8cd2d8379a67580de870d4aa2637a807b5806a4220e42fb3268faf325b40a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:03:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z68f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.781830 4765 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-x677d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd80c14d-ebec-4d65-8116-149400d6f8be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:25Z\\\",\\\"message\\\":\\\"03:24.912748 6345 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:169.254.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4de02fb8-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 13:03:24.912867 6345 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:24Z is after 2025-08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:03:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x677d_openshift-ovn-kubernetes(cd80c14d-ebec-4d65-8116-149400d6f8be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T13:02:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q9t46\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x677d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.804024 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d79a624-0b42-4cfc-93a1-bed35cde22f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af415a2b1a600bed15ff24922877360315ed9d18903916dbf1ef95089ca42a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f1f725a4034955aeaf81ba9db97f3d51017a006b64b8654376aeedb20f40bdc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8bc65c4e2910ee16cf31b563f5bdd9bc3facceaedc690c33afa307e6eba06c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.821589 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.834986 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bplfq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9b9a5be-6b15-46d2-8715-506efdae8ae7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79123ef5ce55b0a6e560030a8178ca3e5f52456eca3c33dc0598e5612c71fa3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T13:03:43Z\\\",\\\"message\\\":\\\"2026-01-21T13:02:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0aec8178-382a-4cf6-b094-9b944d46848b\\\\n2026-01-21T13:02:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0aec8178-382a-4cf6-b094-9b944d46848b to /host/opt/cni/bin/\\\\n2026-01-21T13:02:58Z [verbose] multus-daemon started\\\\n2026-01-21T13:02:58Z [verbose] Readiness Indicator file check\\\\n2026-01-21T13:03:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs4dl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:46Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bplfq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.848944 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.860605 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gmkg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d638a9b-eb82-48af-bf7a-dbfc68b5c931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68626e1c3070976f081915c9e5162ae189e6822d3ef239fa03939c2c72e0cd02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxkj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gmkg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.863537 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.863564 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.863573 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.863589 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.863599 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:49Z","lastTransitionTime":"2026-01-21T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.872214 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w5x22" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9c0a4a-ca94-4ab6-a7c2-f1aa14cf01ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8c2c1ddda92e0156a51649ca8d3d13aad885d049008650a0d409d510ce177af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52z2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w5x22\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.882417 4765 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e149390c-e4da-4dfd-bed2-b14de058f921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:02:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad6758f072a07b4d53cee6c72ee6f4921fc43944bb63db3de5926dd671ab47b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T13:02:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vprvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:02:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v72nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.892751 4765 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8dea79f-de5c-4034-9742-c322b723a59c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rzchv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T13:03:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t7jw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:49Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.966183 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.966229 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 
13:03:49.966240 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.966255 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:49 crc kubenswrapper[4765]: I0121 13:03:49.966265 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:49Z","lastTransitionTime":"2026-01-21T13:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.068283 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.068358 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.068409 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.068430 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.068443 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:50Z","lastTransitionTime":"2026-01-21T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.171194 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.171455 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.171471 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.171490 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.171500 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:50Z","lastTransitionTime":"2026-01-21T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.273912 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.273968 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.273986 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.274008 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.274027 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:50Z","lastTransitionTime":"2026-01-21T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.376440 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.376482 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.376491 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.376506 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.376517 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:50Z","lastTransitionTime":"2026-01-21T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.479937 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.480004 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.480016 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.480051 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.480064 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:50Z","lastTransitionTime":"2026-01-21T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.582969 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.583052 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.583070 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.583094 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.583129 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:50Z","lastTransitionTime":"2026-01-21T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.613112 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.613188 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:50 crc kubenswrapper[4765]: E0121 13:03:50.613295 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.613224 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:50 crc kubenswrapper[4765]: E0121 13:03:50.613449 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:50 crc kubenswrapper[4765]: E0121 13:03:50.613598 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.685697 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.685791 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.685811 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.685838 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.685859 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:50Z","lastTransitionTime":"2026-01-21T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.710022 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 11:28:52.594902221 +0000 UTC Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.787864 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.787940 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.787959 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.787990 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.788010 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:50Z","lastTransitionTime":"2026-01-21T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.891174 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.891278 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.891293 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.891314 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.891329 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:50Z","lastTransitionTime":"2026-01-21T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.993817 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.993866 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.993877 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.993893 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:50 crc kubenswrapper[4765]: I0121 13:03:50.993903 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:50Z","lastTransitionTime":"2026-01-21T13:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.096484 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.096528 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.096540 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.096556 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.096568 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:51Z","lastTransitionTime":"2026-01-21T13:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.199936 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.199998 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.200012 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.200033 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.200049 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:51Z","lastTransitionTime":"2026-01-21T13:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.302880 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.302938 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.302952 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.302968 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.302979 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:51Z","lastTransitionTime":"2026-01-21T13:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.406452 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.406547 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.406570 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.406603 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.406626 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:51Z","lastTransitionTime":"2026-01-21T13:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.509155 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.509239 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.509270 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.509290 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.509301 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:51Z","lastTransitionTime":"2026-01-21T13:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.612133 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.612282 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.612309 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.612347 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.612368 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:51Z","lastTransitionTime":"2026-01-21T13:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.612692 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:51 crc kubenswrapper[4765]: E0121 13:03:51.612974 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.711035 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 00:06:24.144710215 +0000 UTC Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.715369 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.715438 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.715454 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.715476 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.715489 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:51Z","lastTransitionTime":"2026-01-21T13:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.819248 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.819503 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.819533 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.819555 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.819575 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:51Z","lastTransitionTime":"2026-01-21T13:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.922512 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.923558 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.923606 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.923633 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:51 crc kubenswrapper[4765]: I0121 13:03:51.923654 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:51Z","lastTransitionTime":"2026-01-21T13:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.026810 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.026877 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.026889 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.026907 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.026919 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:52Z","lastTransitionTime":"2026-01-21T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.130061 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.130109 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.130120 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.130137 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.130151 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:52Z","lastTransitionTime":"2026-01-21T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.232375 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.232426 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.232435 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.232449 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.232459 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:52Z","lastTransitionTime":"2026-01-21T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.335664 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.335730 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.335744 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.335763 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.335777 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:52Z","lastTransitionTime":"2026-01-21T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.438579 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.438624 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.438634 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.438652 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.438663 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:52Z","lastTransitionTime":"2026-01-21T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.541424 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.541482 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.541497 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.541521 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.541537 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:52Z","lastTransitionTime":"2026-01-21T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.612725 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.612818 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:52 crc kubenswrapper[4765]: E0121 13:03:52.612895 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.612919 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:52 crc kubenswrapper[4765]: E0121 13:03:52.613060 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:52 crc kubenswrapper[4765]: E0121 13:03:52.613180 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.644889 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.644934 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.644944 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.644960 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.644973 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:52Z","lastTransitionTime":"2026-01-21T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.711903 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 23:51:24.744348331 +0000 UTC Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.747697 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.747763 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.747780 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.747801 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.747819 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:52Z","lastTransitionTime":"2026-01-21T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.851013 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.851073 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.851085 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.851107 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.851120 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:52Z","lastTransitionTime":"2026-01-21T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.953695 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.953744 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.953754 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.953772 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:52 crc kubenswrapper[4765]: I0121 13:03:52.953783 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:52Z","lastTransitionTime":"2026-01-21T13:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.056927 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.057004 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.057017 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.057038 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.057055 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:53Z","lastTransitionTime":"2026-01-21T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.160351 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.160397 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.160408 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.160423 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.160436 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:53Z","lastTransitionTime":"2026-01-21T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.199121 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.199165 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.199175 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.199193 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.199465 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:53Z","lastTransitionTime":"2026-01-21T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:53 crc kubenswrapper[4765]: E0121 13:03:53.213511 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:53Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.217601 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.217620 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.217630 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.217645 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.217656 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:53Z","lastTransitionTime":"2026-01-21T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:53 crc kubenswrapper[4765]: E0121 13:03:53.230350 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:53Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.234437 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.234464 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.234473 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.234489 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.234503 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:53Z","lastTransitionTime":"2026-01-21T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:53 crc kubenswrapper[4765]: E0121 13:03:53.249047 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:53Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.260977 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.261019 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
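The body of each failed PATCH above is a strategic merge patch: the "$setElementOrder/conditions" directive pins the order of the conditions list while individual entries merge by their "type" key, and allocatable, capacity, images, and nodeInfo ride along in the same document. A minimal standalone sketch of that shape follows (hypothetical illustration with values copied from the log; this is not kubelet code):

```go
// Hypothetical sketch of the strategic-merge-patch shape seen in the
// "failed to patch status" messages above; values are taken from the log.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	patch := map[string]any{
		"status": map[string]any{
			// Directive: fix the ordering of the conditions list; the
			// entries themselves are merged by their "type" key.
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"},
				{"type": "DiskPressure"},
				{"type": "PIDPressure"},
				{"type": "Ready"},
			},
			// One merged entry; the real patch carries all four plus
			// allocatable, capacity, images and nodeInfo.
			"conditions": []map[string]string{
				{"type": "Ready", "status": "False", "reason": "KubeletNotReady"},
			},
		},
	}
	out, err := json.Marshal(patch)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

Running it prints a reduced version of the JSON embedded in the err strings above; the bulky images list is what makes each logged attempt so large.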
event="NodeHasNoDiskPressure" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.261029 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.261050 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.261070 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:53Z","lastTransitionTime":"2026-01-21T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:53 crc kubenswrapper[4765]: E0121 13:03:53.276764 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:53Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.281558 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.281639 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
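Every retry fails the same way: the TLS handshake to the webhook at 127.0.0.1:9743 aborts because the serving certificate's NotAfter (2025-08-24T17:21:41Z) is before the current time (2026-01-21). The wording "certificate has expired or is not yet valid" is Go's crypto/x509 validity-window error. A small diagnostic sketch that reproduces the same comparison against that endpoint (assumes the endpoint is reachable; InsecureSkipVerify only disables verification so an expired certificate's dates can still be read):

```go
// Diagnostic sketch: read the webhook certificate's validity window and
// repeat the comparison that x509 verification performs. The address is
// taken from the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	default:
		fmt.Println("certificate is valid until", cert.NotAfter.Format(time.RFC3339))
	}
}
```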
event="NodeHasNoDiskPressure" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.281653 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.281671 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.281686 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:53Z","lastTransitionTime":"2026-01-21T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:53 crc kubenswrapper[4765]: E0121 13:03:53.295524 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T13:03:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6701690f-553a-4c5a-9946-a680675a0350\\\",\\\"systemUUID\\\":\\\"66943250-b7ae-4c71-9b94-062a3ddaf203\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T13:03:53Z is after 2025-08-24T17:21:41Z" Jan 21 13:03:53 crc kubenswrapper[4765]: E0121 13:03:53.295641 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.297578 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
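The "Unable to update node status" record closes the cycle: the kubelet attempts the status patch a fixed number of times per sync (the nodeStatusUpdateRetry constant, 5 in upstream kubelet) and then gives up until the next sync interval, which is why the same burst repeats throughout this log. A simplified sketch of that bounded-retry shape (the names mirror upstream, but the failing body is a stand-in for the real PATCH call):

```go
// Simplified sketch of the bounded retry behind "Error updating node
// status, will retry" followed by "update node status exceeds retry count".
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // upstream kubelet constant

func tryUpdateNodeStatus() error {
	// In this log every attempt fails identically, because the admission
	// webhook's serving certificate has expired.
	return errors.New("failed calling webhook: certificate has expired")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```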
event="NodeHasSufficientMemory" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.297612 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.297624 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.297643 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.297655 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:53Z","lastTransitionTime":"2026-01-21T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.400547 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.400609 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.400623 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.400645 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.400659 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:53Z","lastTransitionTime":"2026-01-21T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.503263 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.503317 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.503329 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.503346 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.503361 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:53Z","lastTransitionTime":"2026-01-21T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.605864 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.605903 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.605912 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.605926 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.605935 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:53Z","lastTransitionTime":"2026-01-21T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.613237 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:53 crc kubenswrapper[4765]: E0121 13:03:53.613438 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.630408 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.708780 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.708831 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.708842 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.708859 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.709170 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:53Z","lastTransitionTime":"2026-01-21T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
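The "No sandbox for pod can be found" / "Error syncing pod, skipping" pairs show the effect of that NetworkReady=false state on individual pods: pods that need the pod network (networking-console-plugin here) are skipped each sync, while host-network pods such as the etcd-crc static pod added in the SyncLoop ADD line can still be started. The sketch below is only a schematic of that gating; the types and function names are illustrative, not kubelet's own.

    // netgate.go: schematic of the gating visible in the "Error syncing
    // pod, skipping" lines: non-host-network pods are deferred while the
    // runtime network is not ready.
    package main

    import (
        "errors"
        "fmt"
    )

    type pod struct {
        name        string
        hostNetwork bool
    }

    var errNetNotReady = errors.New("network is not ready: no CNI configuration file in /etc/kubernetes/cni/net.d/")

    func syncPod(p pod, networkReady bool) error {
        if !networkReady && !p.hostNetwork {
            return errNetNotReady // kubelet logs this and retries later
        }
        // ... create sandbox, start containers ...
        return nil
    }

    func main() {
        pods := []pod{
            {"openshift-etcd/etcd-crc", true},
            {"openshift-network-console/networking-console-plugin-85b44fc459-gdk6g", false},
        }
        for _, p := range pods {
            if err := syncPod(p, false); err != nil {
                fmt.Printf("%s: %v\n", p.name, err)
            } else {
                fmt.Printf("%s: synced\n", p.name)
            }
        }
    }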
Has your network provider started?"} Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.712926 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 03:47:39.100000608 +0000 UTC Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.812845 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.812910 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.812934 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.812964 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.812988 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:53Z","lastTransitionTime":"2026-01-21T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.915987 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.916054 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.916069 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.916084 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:53 crc kubenswrapper[4765]: I0121 13:03:53.916097 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:53Z","lastTransitionTime":"2026-01-21T13:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
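The certificate_manager lines print a different "rotation deadline" on each pass because client-go's certificate manager re-draws a jittered deadline somewhere around 70-90% of the certificate's lifetime every time it evaluates rotation; since every deadline logged here (2025-11-26 through 2026-01-12) is already in the past relative to the node clock, the manager keeps re-evaluating on each loop. The sketch below reproduces that arithmetic under the stated assumption; the 0.7/0.3 constants are an approximation of client-go's behavior, and the NotBefore is back-derived for illustration, not read from the log.

    // rotation.go: approximate the jittered rotation deadline seen in the
    // certificate_manager lines. The jitter constants and the assumed
    // one-year lifetime are assumptions, not values from the log.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.3*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // from the log
        notBefore := notAfter.Add(-365 * 24 * time.Hour)                // assumed lifetime
        for i := 0; i < 3; i++ {
            fmt.Println("deadline:", nextRotationDeadline(notBefore, notAfter))
        }
    }

Running this a few times yields deadlines scattered across the last third of the validity window, matching the way each kubelet pass above logs a different date.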
Has your network provider started?"} Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.019759 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.019815 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.019828 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.019847 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.019860 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:54Z","lastTransitionTime":"2026-01-21T13:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.122526 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.122595 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.122609 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.122631 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.122645 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:54Z","lastTransitionTime":"2026-01-21T13:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.226124 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.226577 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.226597 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.226617 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.226629 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:54Z","lastTransitionTime":"2026-01-21T13:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.329371 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.329432 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.329450 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.329473 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.329491 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:54Z","lastTransitionTime":"2026-01-21T13:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.432546 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.433035 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.433133 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.433250 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.433358 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:54Z","lastTransitionTime":"2026-01-21T13:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.536311 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.536349 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.536362 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.536380 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.536392 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:54Z","lastTransitionTime":"2026-01-21T13:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.612886 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:54 crc kubenswrapper[4765]: E0121 13:03:54.613052 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.613136 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:54 crc kubenswrapper[4765]: E0121 13:03:54.613226 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.613637 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:54 crc kubenswrapper[4765]: E0121 13:03:54.613886 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.639413 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.639455 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.639464 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.639479 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.639489 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:54Z","lastTransitionTime":"2026-01-21T13:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.713624 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 13:31:14.034343086 +0000 UTC Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.742316 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.742362 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.742375 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.742394 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.742406 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:54Z","lastTransitionTime":"2026-01-21T13:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.848071 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.848122 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.848133 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.848154 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.848166 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:54Z","lastTransitionTime":"2026-01-21T13:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.951861 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.951949 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.952062 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.952118 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:54 crc kubenswrapper[4765]: I0121 13:03:54.952144 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:54Z","lastTransitionTime":"2026-01-21T13:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.055250 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.055304 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.055315 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.055334 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.055348 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:55Z","lastTransitionTime":"2026-01-21T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.157783 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.157847 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.157859 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.157876 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.157886 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:55Z","lastTransitionTime":"2026-01-21T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.260249 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.260301 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.260315 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.260334 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.260346 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:55Z","lastTransitionTime":"2026-01-21T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.362457 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.362509 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.362527 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.362550 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.362566 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:55Z","lastTransitionTime":"2026-01-21T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.465713 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.465778 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.465797 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.465823 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.465842 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:55Z","lastTransitionTime":"2026-01-21T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.568243 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.568395 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.568419 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.568449 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.568470 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:55Z","lastTransitionTime":"2026-01-21T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.613753 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:55 crc kubenswrapper[4765]: E0121 13:03:55.614715 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.615302 4765 scope.go:117] "RemoveContainer" containerID="3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.671743 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.671776 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.671787 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.671800 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.671810 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:55Z","lastTransitionTime":"2026-01-21T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.714805 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 16:55:20.442892204 +0000 UTC Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.774942 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.774979 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.774992 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.775008 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.775019 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:55Z","lastTransitionTime":"2026-01-21T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.878552 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.878604 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.878619 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.878641 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.878654 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:55Z","lastTransitionTime":"2026-01-21T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.981818 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.981865 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.981873 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.981894 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:55 crc kubenswrapper[4765]: I0121 13:03:55.981904 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:55Z","lastTransitionTime":"2026-01-21T13:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.085633 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.085675 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.085685 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.085705 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.085715 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:56Z","lastTransitionTime":"2026-01-21T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.189006 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.189074 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.189094 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.189120 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.189142 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:56Z","lastTransitionTime":"2026-01-21T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.293255 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.293326 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.293348 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.293381 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.293406 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:56Z","lastTransitionTime":"2026-01-21T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.397712 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.398258 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.398272 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.398292 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.398308 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:56Z","lastTransitionTime":"2026-01-21T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.501522 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.501580 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.501595 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.501618 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.501632 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:56Z","lastTransitionTime":"2026-01-21T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.603742 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.603789 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.603802 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.603820 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.603833 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:56Z","lastTransitionTime":"2026-01-21T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.613100 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:03:56 crc kubenswrapper[4765]: E0121 13:03:56.613257 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.613348 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:03:56 crc kubenswrapper[4765]: E0121 13:03:56.613435 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.613824 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:03:56 crc kubenswrapper[4765]: E0121 13:03:56.614179 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.706823 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.706879 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.706890 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.706909 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.706921 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:56Z","lastTransitionTime":"2026-01-21T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.715988 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 20:02:18.669311408 +0000 UTC Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.809581 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.809624 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.809636 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.809657 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.809673 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:56Z","lastTransitionTime":"2026-01-21T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.913566 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.913616 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.913629 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.913650 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:56 crc kubenswrapper[4765]: I0121 13:03:56.913663 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:56Z","lastTransitionTime":"2026-01-21T13:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.016769 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.016856 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.016877 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.016906 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.016924 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:57Z","lastTransitionTime":"2026-01-21T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.119888 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.119951 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.119966 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.119990 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.120004 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:57Z","lastTransitionTime":"2026-01-21T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.223316 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.223366 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.223380 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.223395 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.223409 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:57Z","lastTransitionTime":"2026-01-21T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.326276 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.326634 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.326711 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.326790 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.326872 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:57Z","lastTransitionTime":"2026-01-21T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.429511 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.429581 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.429594 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.429616 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.429628 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:57Z","lastTransitionTime":"2026-01-21T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.533275 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.533331 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.533342 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.533362 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.533373 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:57Z","lastTransitionTime":"2026-01-21T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.613648 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:03:57 crc kubenswrapper[4765]: E0121 13:03:57.613853 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.635362 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.635408 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.635418 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.635433 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.635447 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:57Z","lastTransitionTime":"2026-01-21T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.716705 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 17:23:32.748157859 +0000 UTC Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.738237 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.738297 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.738317 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.738338 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.738353 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:57Z","lastTransitionTime":"2026-01-21T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.841114 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.841176 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.841191 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.841237 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.841256 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:57Z","lastTransitionTime":"2026-01-21T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.944263 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.944360 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.944373 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.944390 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:57 crc kubenswrapper[4765]: I0121 13:03:57.944416 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:57Z","lastTransitionTime":"2026-01-21T13:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.047289 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.047334 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.047347 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.047364 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.047376 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:58Z","lastTransitionTime":"2026-01-21T13:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.150791 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.150819 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.150827 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.150843 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.150852 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:58Z","lastTransitionTime":"2026-01-21T13:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.283749 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.283788 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.283800 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.283816 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.283831 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:58Z","lastTransitionTime":"2026-01-21T13:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.386691 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.386729 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.386737 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.386753 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.386762 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:58Z","lastTransitionTime":"2026-01-21T13:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.489086 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.489125 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.489136 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.489154 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.489165 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:58Z","lastTransitionTime":"2026-01-21T13:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.592347 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.592391 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.592401 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.592415 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.592425 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:58Z","lastTransitionTime":"2026-01-21T13:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.613705 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.613765 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.613790 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 13:03:58 crc kubenswrapper[4765]: E0121 13:03:58.613952 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 13:03:58 crc kubenswrapper[4765]: E0121 13:03:58.614165 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c"
Jan 21 13:03:58 crc kubenswrapper[4765]: E0121 13:03:58.614248 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.695038 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.695085 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.695097 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.695114 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.695127 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:58Z","lastTransitionTime":"2026-01-21T13:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.717534 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:42:03.878960299 +0000 UTC
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.798242 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.798290 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.798302 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.798323 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.798338 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:58Z","lastTransitionTime":"2026-01-21T13:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.806911 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovnkube-controller/2.log"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.811565 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerStarted","Data":"de49c802f91ed570b843dd8ca4ae6d4d198043461ef29509f6ae58e5cc55250a"}
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.812091 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x677d"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.855858 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" podStartSLOduration=73.855830219 podStartE2EDuration="1m13.855830219s" podCreationTimestamp="2026-01-21 13:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:03:58.855821398 +0000 UTC m=+99.873547220" watchObservedRunningTime="2026-01-21 13:03:58.855830219 +0000 UTC m=+99.873556041"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.897955 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=5.897933825 podStartE2EDuration="5.897933825s" podCreationTimestamp="2026-01-21 13:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:03:58.897136563 +0000 UTC m=+99.914862395" watchObservedRunningTime="2026-01-21 13:03:58.897933825 +0000 UTC m=+99.915659647"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.904657 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.904685 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.904693 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.904706 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.904716 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:58Z","lastTransitionTime":"2026-01-21T13:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
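The "SyncLoop (PLEG)" line is the Pod Lifecycle Event Generator relisting containers and handing the sync loop a typed event; the logged event={"ID":...,"Type":"ContainerStarted","Data":"..."} matches the shape of the kubelet's PodLifecycleEvent. A mirrored sketch with local types for illustration (the real ones live in the kubelet's internal pleg package, which is not importable as shown here):

    package main

    import "fmt"

    // PodLifecycleEvent mirrors the event structure the kubelet logs from
    // its PLEG relist loop (local illustration, not the kubelet's package).
    type PodLifecycleEvent struct {
        ID   string      // pod UID
        Type string      // e.g. "ContainerStarted", "ContainerDied"
        Data interface{} // container ID for container events
    }

    func handle(ev PodLifecycleEvent) {
        switch ev.Type {
        case "ContainerStarted":
            fmt.Printf("pod %s: container %v started, trigger pod sync\n", ev.ID, ev.Data)
        case "ContainerDied":
            fmt.Printf("pod %s: container %v died, decide restart/backoff\n", ev.ID, ev.Data)
        }
    }

    func main() {
        handle(PodLifecycleEvent{
            ID:   "cd80c14d-ebec-4d65-8116-149400d6f8be",
            Type: "ContainerStarted",
            Data: "de49c802f91ed570b843dd8ca4ae6d4d198043461ef29509f6ae58e5cc55250a",
        })
    }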
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.991990 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-z68f6" podStartSLOduration=73.991924813 podStartE2EDuration="1m13.991924813s" podCreationTimestamp="2026-01-21 13:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:03:58.971546499 +0000 UTC m=+99.989272321" watchObservedRunningTime="2026-01-21 13:03:58.991924813 +0000 UTC m=+100.009650635"
Jan 21 13:03:58 crc kubenswrapper[4765]: I0121 13:03:58.992349 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=77.992345235 podStartE2EDuration="1m17.992345235s" podCreationTimestamp="2026-01-21 13:02:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:03:58.991918233 +0000 UTC m=+100.009644065" watchObservedRunningTime="2026-01-21 13:03:58.992345235 +0000 UTC m=+100.010071057"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.007860 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.007919 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.007932 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.007953 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.008064 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:59Z","lastTransitionTime":"2026-01-21T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
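The pod_startup_latency_tracker lines report two durations: podStartE2EDuration (observed running time minus pod creation) and podStartSLOduration, which additionally excludes time spent pulling images. Here firstStartedPulling/lastFinishedPulling are the zero time (all images already present), so the two values coincide. A small sketch of that arithmetic, assuming the subtraction described (the exact accounting lives in the kubelet's pod_startup_latency_tracker.go):

    package main

    import (
        "fmt"
        "time"
    )

    // startupDurations sketches how kubelet-style SLO and E2E startup
    // durations relate: SLO excludes time spent pulling images.
    func startupDurations(created, observedRunning, firstPull, lastPull time.Time) (slo, e2e time.Duration) {
        e2e = observedRunning.Sub(created)
        slo = e2e
        if !firstPull.IsZero() && !lastPull.IsZero() {
            slo -= lastPull.Sub(firstPull)
        }
        return slo, e2e
    }

    func main() {
        created := time.Date(2026, 1, 21, 13, 2, 45, 0, time.UTC)
        running := created.Add(73*time.Second + 855830219*time.Nanosecond)
        slo, e2e := startupDurations(created, running, time.Time{}, time.Time{})
        fmt.Println(slo, e2e) // both ~1m13.855s, as in the ovnkube-node-x677d line
    }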
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.027074 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-bplfq" podStartSLOduration=74.027050009 podStartE2EDuration="1m14.027050009s" podCreationTimestamp="2026-01-21 13:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:03:59.026389181 +0000 UTC m=+100.044115003" watchObservedRunningTime="2026-01-21 13:03:59.027050009 +0000 UTC m=+100.044775831"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.054527 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-gmkg6" podStartSLOduration=76.054499287 podStartE2EDuration="1m16.054499287s" podCreationTimestamp="2026-01-21 13:02:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:03:59.053920821 +0000 UTC m=+100.071646643" watchObservedRunningTime="2026-01-21 13:03:59.054499287 +0000 UTC m=+100.072225109"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.067899 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-w5x22" podStartSLOduration=76.067878941 podStartE2EDuration="1m16.067878941s" podCreationTimestamp="2026-01-21 13:02:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:03:59.067769168 +0000 UTC m=+100.085494990" watchObservedRunningTime="2026-01-21 13:03:59.067878941 +0000 UTC m=+100.085604763"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.086828 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podStartSLOduration=75.086801936 podStartE2EDuration="1m15.086801936s" podCreationTimestamp="2026-01-21 13:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:03:59.085050488 +0000 UTC m=+100.102776310" watchObservedRunningTime="2026-01-21 13:03:59.086801936 +0000 UTC m=+100.104527758"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.116359 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.116404 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.116413 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.116431 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.116448 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:59Z","lastTransitionTime":"2026-01-21T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.126856 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=45.126831495 podStartE2EDuration="45.126831495s" podCreationTimestamp="2026-01-21 13:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:03:59.124334127 +0000 UTC m=+100.142059949" watchObservedRunningTime="2026-01-21 13:03:59.126831495 +0000 UTC m=+100.144557317"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.140005 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=35.139975573 podStartE2EDuration="35.139975573s" podCreationTimestamp="2026-01-21 13:03:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:03:59.138661287 +0000 UTC m=+100.156387109" watchObservedRunningTime="2026-01-21 13:03:59.139975573 +0000 UTC m=+100.157701415"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.179852 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.179830228 podStartE2EDuration="1m19.179830228s" podCreationTimestamp="2026-01-21 13:02:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:03:59.163902875 +0000 UTC m=+100.181628707" watchObservedRunningTime="2026-01-21 13:03:59.179830228 +0000 UTC m=+100.197556040"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.210738 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvtm9" podStartSLOduration=73.210716139 podStartE2EDuration="1m13.210716139s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:03:59.20966562 +0000 UTC m=+100.227391442" watchObservedRunningTime="2026-01-21 13:03:59.210716139 +0000 UTC m=+100.228441961"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.219421 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.219460 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.219469 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.219485 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.219496 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:59Z","lastTransitionTime":"2026-01-21T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.322552 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.322912 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.323209 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.323332 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.323414 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:59Z","lastTransitionTime":"2026-01-21T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.426279 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.426348 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.426361 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.426384 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.426401 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:59Z","lastTransitionTime":"2026-01-21T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.529376 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.529464 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.529480 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.529511 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.529527 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:59Z","lastTransitionTime":"2026-01-21T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.614799 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 13:03:59 crc kubenswrapper[4765]: E0121 13:03:59.615035 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.631965 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.632007 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.632018 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.632034 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.632045 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:59Z","lastTransitionTime":"2026-01-21T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.718150 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 00:07:26.637976627 +0000 UTC
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.736027 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.736094 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.736111 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.736133 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.736146 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:59Z","lastTransitionTime":"2026-01-21T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.815978 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovnkube-controller/3.log"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.817044 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovnkube-controller/2.log"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.820266 4765 generic.go:334] "Generic (PLEG): container finished" podID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerID="de49c802f91ed570b843dd8ca4ae6d4d198043461ef29509f6ae58e5cc55250a" exitCode=1
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.820321 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerDied","Data":"de49c802f91ed570b843dd8ca4ae6d4d198043461ef29509f6ae58e5cc55250a"}
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.820365 4765 scope.go:117] "RemoveContainer" containerID="3ed2d1febabd20ed788dae152ba09e38b56ad40d228861d7ca9659439f0845e1"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.822742 4765 scope.go:117] "RemoveContainer" containerID="de49c802f91ed570b843dd8ca4ae6d4d198043461ef29509f6ae58e5cc55250a"
Jan 21 13:03:59 crc kubenswrapper[4765]: E0121 13:03:59.822965 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x677d_openshift-ovn-kubernetes(cd80c14d-ebec-4d65-8116-149400d6f8be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.840184 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.840670 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.840786 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.840980 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.841129 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:59Z","lastTransitionTime":"2026-01-21T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
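The "back-off 40s restarting failed container" message is the kubelet's per-container exponential backoff: the delay starts small and doubles on each consecutive crash up to a cap (upstream defaults are commonly cited as a 10s base doubling to a 5m ceiling, on which reading 40s corresponds to the third restart attempt). A sketch of that doubling, with the base and cap as assumptions rather than values taken from this log:

    package main

    import (
        "fmt"
        "time"
    )

    // crashLoopDelay sketches kubelet-style container restart backoff:
    // base delay doubled per consecutive crash, clamped at max.
    func crashLoopDelay(restarts int, base, max time.Duration) time.Duration {
        d := base
        for i := 0; i < restarts; i++ {
            d *= 2
            if d >= max {
                return max
            }
        }
        return d
    }

    func main() {
        for r := 0; r <= 6; r++ {
            fmt.Printf("restart %d -> back-off %s\n", r, crashLoopDelay(r, 10*time.Second, 5*time.Minute))
        }
        // restart 2 -> back-off 40s, matching the CrashLoopBackOff message above
    }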
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.945002 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.945052 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.945069 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.945088 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:03:59 crc kubenswrapper[4765]: I0121 13:03:59.945104 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:03:59Z","lastTransitionTime":"2026-01-21T13:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.048779 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.048840 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.048853 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.048875 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.048891 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:00Z","lastTransitionTime":"2026-01-21T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.153591 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.153977 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.153990 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.154007 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.154020 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:00Z","lastTransitionTime":"2026-01-21T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.256822 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.257188 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.257343 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.257465 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.257593 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:00Z","lastTransitionTime":"2026-01-21T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.361489 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.361569 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.361590 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.361616 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.361635 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:00Z","lastTransitionTime":"2026-01-21T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.464764 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.464871 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.464929 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.464964 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.464984 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:00Z","lastTransitionTime":"2026-01-21T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.567782 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.567829 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.567843 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.567863 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.567874 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:00Z","lastTransitionTime":"2026-01-21T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.613023 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 13:04:00 crc kubenswrapper[4765]: E0121 13:04:00.613164 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.613407 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw"
Jan 21 13:04:00 crc kubenswrapper[4765]: E0121 13:04:00.613463 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.613596 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 13:04:00 crc kubenswrapper[4765]: E0121 13:04:00.613644 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.671435 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.671479 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.671488 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.671505 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.671514 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:00Z","lastTransitionTime":"2026-01-21T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.718924 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 02:36:06.235664509 +0000 UTC
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.774287 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.774355 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.774365 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.774381 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.774402 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:00Z","lastTransitionTime":"2026-01-21T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.826763 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovnkube-controller/3.log"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.832257 4765 scope.go:117] "RemoveContainer" containerID="de49c802f91ed570b843dd8ca4ae6d4d198043461ef29509f6ae58e5cc55250a"
Jan 21 13:04:00 crc kubenswrapper[4765]: E0121 13:04:00.832437 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x677d_openshift-ovn-kubernetes(cd80c14d-ebec-4d65-8116-149400d6f8be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.877406 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.877459 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.877488 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.877510 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.877524 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:00Z","lastTransitionTime":"2026-01-21T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.980991 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.981078 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.981093 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.981130 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:00 crc kubenswrapper[4765]: I0121 13:04:00.981148 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:00Z","lastTransitionTime":"2026-01-21T13:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.084401 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.084485 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.084496 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.084514 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.084527 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:01Z","lastTransitionTime":"2026-01-21T13:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.187861 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.187918 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.187931 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.187950 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.187965 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:01Z","lastTransitionTime":"2026-01-21T13:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.291366 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.291451 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.291500 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.291533 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.291559 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:01Z","lastTransitionTime":"2026-01-21T13:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.395010 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.395072 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.395084 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.395133 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.395147 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:01Z","lastTransitionTime":"2026-01-21T13:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.498284 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.498381 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.498418 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.498455 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.498478 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:01Z","lastTransitionTime":"2026-01-21T13:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.601498 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.601553 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.601564 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.601584 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.601595 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:01Z","lastTransitionTime":"2026-01-21T13:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.612748 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 13:04:01 crc kubenswrapper[4765]: E0121 13:04:01.612899 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.705283 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.705411 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.705433 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.705462 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.705480 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:01Z","lastTransitionTime":"2026-01-21T13:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.719840 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:07:42.910607041 +0000 UTC
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.808879 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.808934 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.808948 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.808967 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.808979 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:01Z","lastTransitionTime":"2026-01-21T13:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.912035 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.912080 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.912095 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.912113 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:01 crc kubenswrapper[4765]: I0121 13:04:01.912123 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:01Z","lastTransitionTime":"2026-01-21T13:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.014900 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.014945 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.014954 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.014970 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.014979 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:02Z","lastTransitionTime":"2026-01-21T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.118410 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.118448 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.118459 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.118485 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.118497 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:02Z","lastTransitionTime":"2026-01-21T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.221903 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.221951 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.221962 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.221980 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.221992 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:02Z","lastTransitionTime":"2026-01-21T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.324772 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.324810 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.324819 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.324835 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.324846 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:02Z","lastTransitionTime":"2026-01-21T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.427196 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.427288 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.427305 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.427329 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.427346 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:02Z","lastTransitionTime":"2026-01-21T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.531257 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.531312 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.531324 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.531347 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.531360 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:02Z","lastTransitionTime":"2026-01-21T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.613053 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.613267 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.613300 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:02 crc kubenswrapper[4765]: E0121 13:04:02.613305 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:02 crc kubenswrapper[4765]: E0121 13:04:02.613534 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:02 crc kubenswrapper[4765]: E0121 13:04:02.613555 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
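Every status block and sync error above traces to one root cause: the runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ holds no CNI configuration yet (OVN-Kubernetes has not written one). As a rough illustration of the check involved, here is a minimal sketch; the code is hypothetical, not the kubelet/CRI-O source, and it assumes the standard libcni extensions (.conf, .conflist, .json):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigPresent reports whether the CNI conf dir contains at least one
// candidate config file, which is roughly what stands behind the
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" message.
func cniConfigPresent(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni typically scans for
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("network plugin not ready: no CNI configuration file")
		return
	}
	fmt.Println("NetworkReady=true")
}
```

Once the network operator drops a config file into that directory, the same check flips to ready and the NodeNotReady spam stops.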
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.634629 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.634670 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.634681 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.634697 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.634710 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:02Z","lastTransitionTime":"2026-01-21T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.720302 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:03:34.507486979 +0000 UTC Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.739780 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.739847 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.739866 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.739891 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.739908 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:02Z","lastTransitionTime":"2026-01-21T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.842910 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.842968 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.842981 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.843001 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.843016 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:02Z","lastTransitionTime":"2026-01-21T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.945523 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.945604 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.945626 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.945655 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:04:02 crc kubenswrapper[4765]: I0121 13:04:02.945678 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:02Z","lastTransitionTime":"2026-01-21T13:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.049389 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.049469 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.049486 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.049513 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.049530 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:03Z","lastTransitionTime":"2026-01-21T13:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.153784 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.153848 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.153868 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.153901 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.153922 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:03Z","lastTransitionTime":"2026-01-21T13:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.255894 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.255949 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.255962 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.255982 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.255995 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:03Z","lastTransitionTime":"2026-01-21T13:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.358559 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.358653 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.358672 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.358696 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.358709 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:03Z","lastTransitionTime":"2026-01-21T13:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.461597 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.461667 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.461688 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.461712 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.461738 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:03Z","lastTransitionTime":"2026-01-21T13:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.564897 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.564968 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.564979 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.565001 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.565012 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:03Z","lastTransitionTime":"2026-01-21T13:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.613856 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:03 crc kubenswrapper[4765]: E0121 13:04:03.614055 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
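Note that the certificate expiration stays fixed at 2026-02-24 05:53:03 while the logged rotation deadline changes on every pass (2025-11-10 above, 2025-12-24 here, a third value below). That is expected: client-go's certificate manager re-draws the deadline at a random point in roughly the 70-90% band of the certificate's lifetime each time it evaluates rotation, and since every drawn deadline is already in the past relative to this log, it soon proceeds to "Rotating certificates". A minimal sketch of that jitter; the 70-90% band and the one-year lifetime are assumptions here, and this is a reimplementation, not the library source:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a fresh deadline at a uniform random point in
// [70%, 90%) of the certificate's total lifetime, measured from notBefore.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-365 * 24 * time.Hour) // assumed 1y lifetime
	for i := 0; i < 3; i++ {
		// Each call yields a different deadline, just like the log lines.
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```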
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.667970 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.668022 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.668035 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.668053 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.668068 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:03Z","lastTransitionTime":"2026-01-21T13:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.693645 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.693688 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.693701 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.693720 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.693731 4765 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T13:04:03Z","lastTransitionTime":"2026-01-21T13:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.720616 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 14:29:08.502036943 +0000 UTC Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.720677 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.730644 4765 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.734272 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9"] Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.734696 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.736490 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.736847 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.736913 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.737300 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.842836 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f00f60df-991b-4951-9b4f-bb6f5b30afe5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ccsv9\" (UID: \"f00f60df-991b-4951-9b4f-bb6f5b30afe5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.842886 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f00f60df-991b-4951-9b4f-bb6f5b30afe5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ccsv9\" (UID: \"f00f60df-991b-4951-9b4f-bb6f5b30afe5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.842968 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f00f60df-991b-4951-9b4f-bb6f5b30afe5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ccsv9\" (UID: \"f00f60df-991b-4951-9b4f-bb6f5b30afe5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.843039 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f00f60df-991b-4951-9b4f-bb6f5b30afe5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ccsv9\" (UID: \"f00f60df-991b-4951-9b4f-bb6f5b30afe5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.843088 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f00f60df-991b-4951-9b4f-bb6f5b30afe5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ccsv9\" (UID: \"f00f60df-991b-4951-9b4f-bb6f5b30afe5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.944542 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f00f60df-991b-4951-9b4f-bb6f5b30afe5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ccsv9\" (UID: \"f00f60df-991b-4951-9b4f-bb6f5b30afe5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" Jan 21 
13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.944585 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f00f60df-991b-4951-9b4f-bb6f5b30afe5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ccsv9\" (UID: \"f00f60df-991b-4951-9b4f-bb6f5b30afe5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.944609 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f00f60df-991b-4951-9b4f-bb6f5b30afe5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ccsv9\" (UID: \"f00f60df-991b-4951-9b4f-bb6f5b30afe5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.944635 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f00f60df-991b-4951-9b4f-bb6f5b30afe5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ccsv9\" (UID: \"f00f60df-991b-4951-9b4f-bb6f5b30afe5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.944654 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f00f60df-991b-4951-9b4f-bb6f5b30afe5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ccsv9\" (UID: \"f00f60df-991b-4951-9b4f-bb6f5b30afe5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.944705 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f00f60df-991b-4951-9b4f-bb6f5b30afe5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ccsv9\" (UID: \"f00f60df-991b-4951-9b4f-bb6f5b30afe5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.944786 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f00f60df-991b-4951-9b4f-bb6f5b30afe5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ccsv9\" (UID: \"f00f60df-991b-4951-9b4f-bb6f5b30afe5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.946080 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f00f60df-991b-4951-9b4f-bb6f5b30afe5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ccsv9\" (UID: \"f00f60df-991b-4951-9b4f-bb6f5b30afe5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.954027 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f00f60df-991b-4951-9b4f-bb6f5b30afe5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ccsv9\" (UID: \"f00f60df-991b-4951-9b4f-bb6f5b30afe5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" Jan 21 13:04:03 crc kubenswrapper[4765]: I0121 13:04:03.960777 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/f00f60df-991b-4951-9b4f-bb6f5b30afe5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ccsv9\" (UID: \"f00f60df-991b-4951-9b4f-bb6f5b30afe5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" Jan 21 13:04:04 crc kubenswrapper[4765]: I0121 13:04:04.054068 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" Jan 21 13:04:04 crc kubenswrapper[4765]: I0121 13:04:04.613485 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:04 crc kubenswrapper[4765]: E0121 13:04:04.613662 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:04 crc kubenswrapper[4765]: I0121 13:04:04.613741 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:04 crc kubenswrapper[4765]: E0121 13:04:04.613872 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:04 crc kubenswrapper[4765]: I0121 13:04:04.614000 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:04 crc kubenswrapper[4765]: E0121 13:04:04.614130 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
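The reconciler lines above show the usual two-phase volume flow for the cluster-version-operator pod: VerifyControllerAttachedVolume first, then MountVolume.SetUp, once per volume. A schematic sketch of that flow with hypothetical types (this is not the kubelet's reconciler, just its shape):

```go
package main

import "fmt"

// volume is a stand-in for kubelet's volume spec (hypothetical type).
type volume struct{ name, kind string }

// verifyAttached models phase 1 (VerifyControllerAttachedVolume). Host-path,
// configmap, secret and projected volumes need no controller attach, so it is
// a no-op in this sketch.
func verifyAttached(v volume) error { return nil }

// setUp models phase 2 (MountVolume.SetUp): materialize the volume contents
// on disk for the pod.
func setUp(v volume) error {
	fmt.Printf("MountVolume.SetUp succeeded for volume %q (%s)\n", v.name, v.kind)
	return nil
}

func main() {
	// The five volumes named in the log records above.
	vols := []volume{
		{"service-ca", "configmap"},
		{"etc-ssl-certs", "host-path"},
		{"etc-cvo-updatepayloads", "host-path"},
		{"kube-api-access", "projected"},
		{"serving-cert", "secret"},
	}
	for _, v := range vols {
		if err := verifyAttached(v); err != nil {
			continue // the real reconciler would retry on its next pass
		}
		_ = setUp(v)
	}
}
```

Because the CVO pod uses hostNetwork, it can start without CNI; that is why its sandbox and containers come up below while the four CNI-dependent pods keep failing.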
pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:04 crc kubenswrapper[4765]: I0121 13:04:04.852245 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" event={"ID":"f00f60df-991b-4951-9b4f-bb6f5b30afe5","Type":"ContainerStarted","Data":"2912a3c1649c6d4c197f69ca467a7de9c4073c9b544b8d03ec51bc88fdc9ed7b"} Jan 21 13:04:04 crc kubenswrapper[4765]: I0121 13:04:04.852336 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" event={"ID":"f00f60df-991b-4951-9b4f-bb6f5b30afe5","Type":"ContainerStarted","Data":"0faa241c4b9ea32cc7a602e58711c215421c3083739f0c8b3cc4bfde6592a3ba"} Jan 21 13:04:04 crc kubenswrapper[4765]: I0121 13:04:04.868511 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ccsv9" podStartSLOduration=79.868492604 podStartE2EDuration="1m19.868492604s" podCreationTimestamp="2026-01-21 13:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:04:04.867469116 +0000 UTC m=+105.885194938" watchObservedRunningTime="2026-01-21 13:04:04.868492604 +0000 UTC m=+105.886218426" Jan 21 13:04:05 crc kubenswrapper[4765]: I0121 13:04:05.613496 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:05 crc kubenswrapper[4765]: E0121 13:04:05.613863 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:06 crc kubenswrapper[4765]: I0121 13:04:06.612995 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:06 crc kubenswrapper[4765]: I0121 13:04:06.612995 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:06 crc kubenswrapper[4765]: I0121 13:04:06.613014 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:06 crc kubenswrapper[4765]: E0121 13:04:06.614248 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:06 crc kubenswrapper[4765]: E0121 13:04:06.614272 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:06 crc kubenswrapper[4765]: E0121 13:04:06.614087 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:07 crc kubenswrapper[4765]: I0121 13:04:07.613086 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:07 crc kubenswrapper[4765]: E0121 13:04:07.613417 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:08 crc kubenswrapper[4765]: I0121 13:04:08.393495 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs\") pod \"network-metrics-daemon-4t7jw\" (UID: \"d8dea79f-de5c-4034-9742-c322b723a59c\") " pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:08 crc kubenswrapper[4765]: E0121 13:04:08.393734 4765 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:04:08 crc kubenswrapper[4765]: E0121 13:04:08.393838 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs podName:d8dea79f-de5c-4034-9742-c322b723a59c nodeName:}" failed. No retries permitted until 2026-01-21 13:05:12.393813893 +0000 UTC m=+173.411539715 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs") pod "network-metrics-daemon-4t7jw" (UID: "d8dea79f-de5c-4034-9742-c322b723a59c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 13:04:08 crc kubenswrapper[4765]: I0121 13:04:08.612882 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:08 crc kubenswrapper[4765]: I0121 13:04:08.612981 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:08 crc kubenswrapper[4765]: I0121 13:04:08.613010 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:08 crc kubenswrapper[4765]: E0121 13:04:08.613079 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:08 crc kubenswrapper[4765]: E0121 13:04:08.613155 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:08 crc kubenswrapper[4765]: E0121 13:04:08.613322 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:09 crc kubenswrapper[4765]: I0121 13:04:09.613121 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:09 crc kubenswrapper[4765]: E0121 13:04:09.614237 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:10 crc kubenswrapper[4765]: I0121 13:04:10.613246 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:10 crc kubenswrapper[4765]: I0121 13:04:10.613302 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:10 crc kubenswrapper[4765]: E0121 13:04:10.613382 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:10 crc kubenswrapper[4765]: I0121 13:04:10.613434 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:10 crc kubenswrapper[4765]: E0121 13:04:10.613588 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:10 crc kubenswrapper[4765]: E0121 13:04:10.613661 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:11 crc kubenswrapper[4765]: I0121 13:04:11.613686 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:11 crc kubenswrapper[4765]: E0121 13:04:11.613921 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:12 crc kubenswrapper[4765]: I0121 13:04:12.613452 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:12 crc kubenswrapper[4765]: E0121 13:04:12.613742 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:12 crc kubenswrapper[4765]: I0121 13:04:12.613783 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:12 crc kubenswrapper[4765]: I0121 13:04:12.614475 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:12 crc kubenswrapper[4765]: E0121 13:04:12.614611 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:12 crc kubenswrapper[4765]: E0121 13:04:12.614712 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:12 crc kubenswrapper[4765]: I0121 13:04:12.615159 4765 scope.go:117] "RemoveContainer" containerID="de49c802f91ed570b843dd8ca4ae6d4d198043461ef29509f6ae58e5cc55250a" Jan 21 13:04:12 crc kubenswrapper[4765]: E0121 13:04:12.615628 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x677d_openshift-ovn-kubernetes(cd80c14d-ebec-4d65-8116-149400d6f8be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" Jan 21 13:04:13 crc kubenswrapper[4765]: I0121 13:04:13.612753 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:13 crc kubenswrapper[4765]: E0121 13:04:13.613421 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:14 crc kubenswrapper[4765]: I0121 13:04:14.613309 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:14 crc kubenswrapper[4765]: E0121 13:04:14.613482 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:14 crc kubenswrapper[4765]: I0121 13:04:14.613543 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:14 crc kubenswrapper[4765]: I0121 13:04:14.613562 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:14 crc kubenswrapper[4765]: E0121 13:04:14.613616 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:14 crc kubenswrapper[4765]: E0121 13:04:14.613737 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:15 crc kubenswrapper[4765]: I0121 13:04:15.613568 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:15 crc kubenswrapper[4765]: E0121 13:04:15.613743 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:16 crc kubenswrapper[4765]: I0121 13:04:16.613093 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:16 crc kubenswrapper[4765]: I0121 13:04:16.613246 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:16 crc kubenswrapper[4765]: I0121 13:04:16.613280 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:16 crc kubenswrapper[4765]: E0121 13:04:16.613458 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:16 crc kubenswrapper[4765]: E0121 13:04:16.613584 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:16 crc kubenswrapper[4765]: E0121 13:04:16.613685 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:17 crc kubenswrapper[4765]: I0121 13:04:17.613609 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:17 crc kubenswrapper[4765]: E0121 13:04:17.613826 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:18 crc kubenswrapper[4765]: I0121 13:04:18.613765 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:18 crc kubenswrapper[4765]: I0121 13:04:18.613844 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:18 crc kubenswrapper[4765]: E0121 13:04:18.613951 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:18 crc kubenswrapper[4765]: E0121 13:04:18.614137 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:18 crc kubenswrapper[4765]: I0121 13:04:18.613764 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:18 crc kubenswrapper[4765]: E0121 13:04:18.614525 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:19 crc kubenswrapper[4765]: E0121 13:04:19.264737 4765 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 21 13:04:19 crc kubenswrapper[4765]: I0121 13:04:19.612715 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:19 crc kubenswrapper[4765]: E0121 13:04:19.614301 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:19 crc kubenswrapper[4765]: E0121 13:04:19.809566 4765 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 13:04:20 crc kubenswrapper[4765]: I0121 13:04:20.613382 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:20 crc kubenswrapper[4765]: I0121 13:04:20.613410 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:20 crc kubenswrapper[4765]: I0121 13:04:20.613538 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:20 crc kubenswrapper[4765]: E0121 13:04:20.613742 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:20 crc kubenswrapper[4765]: E0121 13:04:20.613844 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:20 crc kubenswrapper[4765]: E0121 13:04:20.613966 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:21 crc kubenswrapper[4765]: I0121 13:04:21.612705 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:21 crc kubenswrapper[4765]: E0121 13:04:21.612903 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:22 crc kubenswrapper[4765]: I0121 13:04:22.612826 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:22 crc kubenswrapper[4765]: I0121 13:04:22.612822 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:22 crc kubenswrapper[4765]: E0121 13:04:22.613000 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:22 crc kubenswrapper[4765]: E0121 13:04:22.613038 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:22 crc kubenswrapper[4765]: I0121 13:04:22.613349 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:22 crc kubenswrapper[4765]: E0121 13:04:22.613432 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:23 crc kubenswrapper[4765]: I0121 13:04:23.613567 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:23 crc kubenswrapper[4765]: E0121 13:04:23.613880 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:24 crc kubenswrapper[4765]: I0121 13:04:24.613692 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:24 crc kubenswrapper[4765]: E0121 13:04:24.614447 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:24 crc kubenswrapper[4765]: I0121 13:04:24.613753 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:24 crc kubenswrapper[4765]: E0121 13:04:24.614611 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:24 crc kubenswrapper[4765]: I0121 13:04:24.613925 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:24 crc kubenswrapper[4765]: E0121 13:04:24.614735 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:24 crc kubenswrapper[4765]: E0121 13:04:24.810384 4765 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 13:04:25 crc kubenswrapper[4765]: I0121 13:04:25.613588 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:25 crc kubenswrapper[4765]: E0121 13:04:25.613772 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:26 crc kubenswrapper[4765]: I0121 13:04:26.612853 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:26 crc kubenswrapper[4765]: I0121 13:04:26.612853 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:26 crc kubenswrapper[4765]: E0121 13:04:26.613028 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:26 crc kubenswrapper[4765]: E0121 13:04:26.613077 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:26 crc kubenswrapper[4765]: I0121 13:04:26.612879 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:26 crc kubenswrapper[4765]: E0121 13:04:26.613283 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:27 crc kubenswrapper[4765]: I0121 13:04:27.613391 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:27 crc kubenswrapper[4765]: E0121 13:04:27.613536 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:27 crc kubenswrapper[4765]: I0121 13:04:27.614723 4765 scope.go:117] "RemoveContainer" containerID="de49c802f91ed570b843dd8ca4ae6d4d198043461ef29509f6ae58e5cc55250a" Jan 21 13:04:27 crc kubenswrapper[4765]: E0121 13:04:27.615127 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x677d_openshift-ovn-kubernetes(cd80c14d-ebec-4d65-8116-149400d6f8be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" Jan 21 13:04:28 crc kubenswrapper[4765]: I0121 13:04:28.613351 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:28 crc kubenswrapper[4765]: I0121 13:04:28.613406 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:28 crc kubenswrapper[4765]: E0121 13:04:28.613531 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:28 crc kubenswrapper[4765]: E0121 13:04:28.613710 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:28 crc kubenswrapper[4765]: I0121 13:04:28.613729 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:28 crc kubenswrapper[4765]: E0121 13:04:28.613847 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:29 crc kubenswrapper[4765]: I0121 13:04:29.613246 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:29 crc kubenswrapper[4765]: E0121 13:04:29.615441 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:29 crc kubenswrapper[4765]: E0121 13:04:29.811026 4765 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 13:04:29 crc kubenswrapper[4765]: I0121 13:04:29.938268 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bplfq_d9b9a5be-6b15-46d2-8715-506efdae8ae7/kube-multus/1.log" Jan 21 13:04:29 crc kubenswrapper[4765]: I0121 13:04:29.938953 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bplfq_d9b9a5be-6b15-46d2-8715-506efdae8ae7/kube-multus/0.log" Jan 21 13:04:29 crc kubenswrapper[4765]: I0121 13:04:29.939000 4765 generic.go:334] "Generic (PLEG): container finished" podID="d9b9a5be-6b15-46d2-8715-506efdae8ae7" containerID="79123ef5ce55b0a6e560030a8178ca3e5f52456eca3c33dc0598e5612c71fa3f" exitCode=1 Jan 21 13:04:29 crc kubenswrapper[4765]: I0121 13:04:29.939046 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bplfq" event={"ID":"d9b9a5be-6b15-46d2-8715-506efdae8ae7","Type":"ContainerDied","Data":"79123ef5ce55b0a6e560030a8178ca3e5f52456eca3c33dc0598e5612c71fa3f"} Jan 21 13:04:29 crc kubenswrapper[4765]: I0121 13:04:29.939101 4765 scope.go:117] "RemoveContainer" containerID="9531cdc0329ac02a84970395bf92267342ea6ab68a77656cdc105dd3e2bf3cce" Jan 21 13:04:29 crc kubenswrapper[4765]: I0121 13:04:29.939614 4765 scope.go:117] "RemoveContainer" containerID="79123ef5ce55b0a6e560030a8178ca3e5f52456eca3c33dc0598e5612c71fa3f" Jan 21 13:04:29 crc kubenswrapper[4765]: E0121 13:04:29.939952 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-bplfq_openshift-multus(d9b9a5be-6b15-46d2-8715-506efdae8ae7)\"" pod="openshift-multus/multus-bplfq" podUID="d9b9a5be-6b15-46d2-8715-506efdae8ae7" Jan 21 13:04:30 crc kubenswrapper[4765]: I0121 13:04:30.613396 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:30 crc kubenswrapper[4765]: I0121 13:04:30.613396 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:30 crc kubenswrapper[4765]: E0121 13:04:30.613562 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:30 crc kubenswrapper[4765]: E0121 13:04:30.613628 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:30 crc kubenswrapper[4765]: I0121 13:04:30.614371 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:30 crc kubenswrapper[4765]: E0121 13:04:30.614745 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:30 crc kubenswrapper[4765]: I0121 13:04:30.945026 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bplfq_d9b9a5be-6b15-46d2-8715-506efdae8ae7/kube-multus/1.log" Jan 21 13:04:31 crc kubenswrapper[4765]: I0121 13:04:31.612780 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:31 crc kubenswrapper[4765]: E0121 13:04:31.612959 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:32 crc kubenswrapper[4765]: I0121 13:04:32.612606 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:32 crc kubenswrapper[4765]: I0121 13:04:32.612604 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:32 crc kubenswrapper[4765]: E0121 13:04:32.613279 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:32 crc kubenswrapper[4765]: I0121 13:04:32.612635 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:32 crc kubenswrapper[4765]: E0121 13:04:32.613465 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:32 crc kubenswrapper[4765]: E0121 13:04:32.613587 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:33 crc kubenswrapper[4765]: I0121 13:04:33.613617 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:33 crc kubenswrapper[4765]: E0121 13:04:33.613830 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:34 crc kubenswrapper[4765]: I0121 13:04:34.613673 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:34 crc kubenswrapper[4765]: I0121 13:04:34.613708 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:34 crc kubenswrapper[4765]: I0121 13:04:34.613739 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:34 crc kubenswrapper[4765]: E0121 13:04:34.613838 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:34 crc kubenswrapper[4765]: E0121 13:04:34.614097 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:34 crc kubenswrapper[4765]: E0121 13:04:34.614285 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:34 crc kubenswrapper[4765]: E0121 13:04:34.812077 4765 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 13:04:35 crc kubenswrapper[4765]: I0121 13:04:35.613557 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:35 crc kubenswrapper[4765]: E0121 13:04:35.614140 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:36 crc kubenswrapper[4765]: I0121 13:04:36.613065 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:36 crc kubenswrapper[4765]: I0121 13:04:36.613136 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:36 crc kubenswrapper[4765]: E0121 13:04:36.613310 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:36 crc kubenswrapper[4765]: E0121 13:04:36.613366 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:36 crc kubenswrapper[4765]: I0121 13:04:36.613727 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:36 crc kubenswrapper[4765]: E0121 13:04:36.613969 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:37 crc kubenswrapper[4765]: I0121 13:04:37.613324 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:37 crc kubenswrapper[4765]: E0121 13:04:37.614074 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:38 crc kubenswrapper[4765]: I0121 13:04:38.612865 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:38 crc kubenswrapper[4765]: E0121 13:04:38.612999 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:38 crc kubenswrapper[4765]: I0121 13:04:38.612883 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:38 crc kubenswrapper[4765]: E0121 13:04:38.613222 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:38 crc kubenswrapper[4765]: I0121 13:04:38.613897 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:38 crc kubenswrapper[4765]: E0121 13:04:38.614101 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:39 crc kubenswrapper[4765]: I0121 13:04:39.613060 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:39 crc kubenswrapper[4765]: E0121 13:04:39.614396 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:39 crc kubenswrapper[4765]: E0121 13:04:39.816754 4765 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 13:04:40 crc kubenswrapper[4765]: I0121 13:04:40.613643 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:40 crc kubenswrapper[4765]: I0121 13:04:40.613709 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:40 crc kubenswrapper[4765]: I0121 13:04:40.613774 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:40 crc kubenswrapper[4765]: E0121 13:04:40.613858 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:40 crc kubenswrapper[4765]: E0121 13:04:40.614182 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:40 crc kubenswrapper[4765]: E0121 13:04:40.614300 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:40 crc kubenswrapper[4765]: I0121 13:04:40.614534 4765 scope.go:117] "RemoveContainer" containerID="de49c802f91ed570b843dd8ca4ae6d4d198043461ef29509f6ae58e5cc55250a" Jan 21 13:04:40 crc kubenswrapper[4765]: I0121 13:04:40.980138 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovnkube-controller/3.log" Jan 21 13:04:40 crc kubenswrapper[4765]: I0121 13:04:40.982202 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerStarted","Data":"73e26c6ffdf8a354d3a45016f806d59bea6134a67cd8caa6a234ab33001ac041"} Jan 21 13:04:40 crc kubenswrapper[4765]: I0121 13:04:40.983150 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:04:41 crc kubenswrapper[4765]: I0121 13:04:41.452415 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4t7jw"] Jan 21 13:04:41 crc kubenswrapper[4765]: I0121 13:04:41.452543 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:41 crc kubenswrapper[4765]: E0121 13:04:41.452652 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:41 crc kubenswrapper[4765]: I0121 13:04:41.613489 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:41 crc kubenswrapper[4765]: E0121 13:04:41.613698 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:42 crc kubenswrapper[4765]: I0121 13:04:42.613542 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:42 crc kubenswrapper[4765]: I0121 13:04:42.613609 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:42 crc kubenswrapper[4765]: E0121 13:04:42.615279 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:42 crc kubenswrapper[4765]: E0121 13:04:42.615293 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:43 crc kubenswrapper[4765]: I0121 13:04:43.613454 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:43 crc kubenswrapper[4765]: E0121 13:04:43.613814 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:43 crc kubenswrapper[4765]: I0121 13:04:43.613520 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:43 crc kubenswrapper[4765]: E0121 13:04:43.614009 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:43 crc kubenswrapper[4765]: I0121 13:04:43.614142 4765 scope.go:117] "RemoveContainer" containerID="79123ef5ce55b0a6e560030a8178ca3e5f52456eca3c33dc0598e5612c71fa3f" Jan 21 13:04:43 crc kubenswrapper[4765]: I0121 13:04:43.995118 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bplfq_d9b9a5be-6b15-46d2-8715-506efdae8ae7/kube-multus/1.log" Jan 21 13:04:43 crc kubenswrapper[4765]: I0121 13:04:43.995184 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bplfq" event={"ID":"d9b9a5be-6b15-46d2-8715-506efdae8ae7","Type":"ContainerStarted","Data":"1ae915ebd49fe934c46ddf83c4203b9e4892daa00e041b4eb261c093882f696f"} Jan 21 13:04:44 crc kubenswrapper[4765]: I0121 13:04:44.612911 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:44 crc kubenswrapper[4765]: I0121 13:04:44.613003 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:44 crc kubenswrapper[4765]: E0121 13:04:44.613201 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:44 crc kubenswrapper[4765]: E0121 13:04:44.613318 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:44 crc kubenswrapper[4765]: E0121 13:04:44.818653 4765 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 13:04:45 crc kubenswrapper[4765]: I0121 13:04:45.612874 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:45 crc kubenswrapper[4765]: I0121 13:04:45.612903 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:45 crc kubenswrapper[4765]: E0121 13:04:45.613555 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:45 crc kubenswrapper[4765]: E0121 13:04:45.613643 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:46 crc kubenswrapper[4765]: I0121 13:04:46.613073 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:46 crc kubenswrapper[4765]: I0121 13:04:46.613141 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:46 crc kubenswrapper[4765]: E0121 13:04:46.613315 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:46 crc kubenswrapper[4765]: E0121 13:04:46.613462 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:47 crc kubenswrapper[4765]: I0121 13:04:47.613297 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:47 crc kubenswrapper[4765]: I0121 13:04:47.613317 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:47 crc kubenswrapper[4765]: E0121 13:04:47.613537 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:47 crc kubenswrapper[4765]: E0121 13:04:47.613656 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:48 crc kubenswrapper[4765]: I0121 13:04:48.613035 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:48 crc kubenswrapper[4765]: I0121 13:04:48.613058 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:48 crc kubenswrapper[4765]: E0121 13:04:48.613369 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 13:04:48 crc kubenswrapper[4765]: E0121 13:04:48.613618 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:04:49 crc kubenswrapper[4765]: I0121 13:04:49.567730 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:49 crc kubenswrapper[4765]: I0121 13:04:49.567985 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:49 crc kubenswrapper[4765]: I0121 13:04:49.568057 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:49 crc kubenswrapper[4765]: I0121 13:04:49.568094 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:49 crc kubenswrapper[4765]: I0121 13:04:49.568133 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:49 crc kubenswrapper[4765]: E0121 13:04:49.568359 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:04:49 crc kubenswrapper[4765]: E0121 13:04:49.568388 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:04:49 crc kubenswrapper[4765]: E0121 13:04:49.568408 4765 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:04:49 crc kubenswrapper[4765]: E0121 13:04:49.568484 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 13:06:51.568460139 +0000 UTC m=+272.586185991 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:04:49 crc kubenswrapper[4765]: E0121 13:04:49.568834 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:06:51.568813939 +0000 UTC m=+272.586539791 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:49 crc kubenswrapper[4765]: E0121 13:04:49.568900 4765 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:04:49 crc kubenswrapper[4765]: E0121 13:04:49.568996 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:06:51.568976143 +0000 UTC m=+272.586701995 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 13:04:49 crc kubenswrapper[4765]: E0121 13:04:49.569350 4765 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:04:49 crc kubenswrapper[4765]: E0121 13:04:49.569423 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 13:06:51.569402525 +0000 UTC m=+272.587128387 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 13:04:49 crc kubenswrapper[4765]: E0121 13:04:49.569720 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 13:04:49 crc kubenswrapper[4765]: E0121 13:04:49.569748 4765 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 13:04:49 crc kubenswrapper[4765]: E0121 13:04:49.569767 4765 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:04:49 crc kubenswrapper[4765]: E0121 13:04:49.569855 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 13:06:51.569804716 +0000 UTC m=+272.587530578 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 13:04:49 crc kubenswrapper[4765]: I0121 13:04:49.615717 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:49 crc kubenswrapper[4765]: E0121 13:04:49.615938 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t7jw" podUID="d8dea79f-de5c-4034-9742-c322b723a59c" Jan 21 13:04:49 crc kubenswrapper[4765]: I0121 13:04:49.616333 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:49 crc kubenswrapper[4765]: E0121 13:04:49.616459 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 13:04:50 crc kubenswrapper[4765]: I0121 13:04:50.612665 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:04:50 crc kubenswrapper[4765]: I0121 13:04:50.612838 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:04:50 crc kubenswrapper[4765]: I0121 13:04:50.614914 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 13:04:50 crc kubenswrapper[4765]: I0121 13:04:50.616104 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 13:04:51 crc kubenswrapper[4765]: I0121 13:04:51.612872 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:04:51 crc kubenswrapper[4765]: I0121 13:04:51.614072 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:04:51 crc kubenswrapper[4765]: I0121 13:04:51.618287 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 13:04:51 crc kubenswrapper[4765]: I0121 13:04:51.620444 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 13:04:51 crc kubenswrapper[4765]: I0121 13:04:51.620929 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 13:04:51 crc kubenswrapper[4765]: I0121 13:04:51.621557 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 13:04:53 crc kubenswrapper[4765]: I0121 13:04:53.788043 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.374617 4765 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.425021 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhjpc"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.425751 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.426594 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hg5vm"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.427429 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hg5vm" Jan 21 13:04:54 crc kubenswrapper[4765]: W0121 13:04:54.431578 4765 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 21 13:04:54 crc kubenswrapper[4765]: E0121 13:04:54.431638 4765 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.432525 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.435623 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.435935 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.442613 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.444659 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.445290 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.445527 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.446166 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.446502 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.446717 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.448679 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.448896 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.449770 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.456125 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.461602 4765 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.464677 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.476307 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.476344 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mnwzz"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.482041 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mnwzz" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.486145 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.487283 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.495297 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w6s52"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.496101 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w6s52" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.502181 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.503493 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.504163 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-25rmn"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.504755 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.513029 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.517785 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.518161 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.518386 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.518571 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.518720 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.518765 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.518872 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.519023 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.519197 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.519362 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.519546 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.519694 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.519825 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.519971 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.520156 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.520705 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.521483 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" 
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.521507 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.521565 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.521839 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.523167 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.526501 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.526367 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xfs5k"]
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.527820 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.528720 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-audit-dir\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.528763 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-image-import-ca\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.528795 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-config\") pod \"controller-manager-879f6c89f-dhjpc\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.528828 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk2v4\" (UniqueName: \"kubernetes.io/projected/fc58cdb9-8e5c-426c-a193-994e3b2ce117-kube-api-access-tk2v4\") pod \"controller-manager-879f6c89f-dhjpc\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.528882 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.528907 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-config\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.528929 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-node-pullsecrets\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.528954 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25qjz\" (UniqueName: \"kubernetes.io/projected/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-kube-api-access-25qjz\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.528979 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-client-ca\") pod \"controller-manager-879f6c89f-dhjpc\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.529005 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-etcd-serving-ca\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.529034 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dhjpc\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.529057 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc58cdb9-8e5c-426c-a193-994e3b2ce117-serving-cert\") pod \"controller-manager-879f6c89f-dhjpc\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.529079 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-encryption-config\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.529102 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-etcd-client\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.529125 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-serving-cert\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.529147 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-audit\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.537005 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.537425 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.540278 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-92hfr"]
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.541316 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.541421 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92hfr"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.547162 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.547576 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.553901 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c"]
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.555096 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.558413 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.558574 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.558758 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.558883 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.558983 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.559083 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.560830 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fq5sj"]
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.565962 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5zz49"]
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.566487 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5zz49"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.566884 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fq5sj"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.575465 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.577320 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.577354 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.578015 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.579084 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.594484 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-l7658"]
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.594770 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.595288 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.595512 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.595538 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-l7658"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.595592 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.595656 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.595792 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.600274 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx"]
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.603110 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.603460 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.611345 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.616114 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.616767 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.619020 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.621268 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-5cllr"]
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.622300 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5cllr"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630129 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d572b4ba-2f55-43ef-8b71-af94f9519768-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630179 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w98tv\" (UniqueName: \"kubernetes.io/projected/b0a0a1c1-7631-4b40-8a54-268af3d95cb6-kube-api-access-w98tv\") pod \"openshift-config-operator-7777fb866f-kn5fp\" (UID: \"b0a0a1c1-7631-4b40-8a54-268af3d95cb6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630220 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630249 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2vb6\" (UniqueName: \"kubernetes.io/projected/f1c56430-35ae-4e7c-9f5a-108205dbe2b3-kube-api-access-c2vb6\") pod \"console-operator-58897d9998-5zz49\" (UID: \"f1c56430-35ae-4e7c-9f5a-108205dbe2b3\") " pod="openshift-console-operator/console-operator-58897d9998-5zz49"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630269 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d572b4ba-2f55-43ef-8b71-af94f9519768-encryption-config\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630292 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1c56430-35ae-4e7c-9f5a-108205dbe2b3-trusted-ca\") pod \"console-operator-58897d9998-5zz49\" (UID: \"f1c56430-35ae-4e7c-9f5a-108205dbe2b3\") " pod="openshift-console-operator/console-operator-58897d9998-5zz49"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630315 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d572b4ba-2f55-43ef-8b71-af94f9519768-etcd-client\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630336 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58192d0b-35de-4d58-8037-559360392628-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-w6s52\" (UID: \"58192d0b-35de-4d58-8037-559360392628\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w6s52"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630357 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/238886b4-14ad-4a1c-8ba4-84b652601186-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-25rmn\" (UID: \"238886b4-14ad-4a1c-8ba4-84b652601186\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630380 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-audit-policies\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630406 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg4zb\" (UniqueName: \"kubernetes.io/projected/48998ce4-56d3-439e-90c5-c7caa4b8344f-kube-api-access-gg4zb\") pod \"cluster-samples-operator-665b6dd947-fq5sj\" (UID: \"48998ce4-56d3-439e-90c5-c7caa4b8344f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fq5sj"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630428 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d572b4ba-2f55-43ef-8b71-af94f9519768-audit-dir\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630452 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630473 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d572b4ba-2f55-43ef-8b71-af94f9519768-audit-policies\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630499 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630521 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/238886b4-14ad-4a1c-8ba4-84b652601186-service-ca-bundle\") pod \"authentication-operator-69f744f599-25rmn\" (UID: \"238886b4-14ad-4a1c-8ba4-84b652601186\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630558 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c56430-35ae-4e7c-9f5a-108205dbe2b3-config\") pod \"console-operator-58897d9998-5zz49\" (UID: \"f1c56430-35ae-4e7c-9f5a-108205dbe2b3\") " pod="openshift-console-operator/console-operator-58897d9998-5zz49"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630586 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-audit-dir\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630612 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630639 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk7g5\" (UniqueName: \"kubernetes.io/projected/6525c86b-8810-4639-8d16-93d25fac15a9-kube-api-access-nk7g5\") pod \"route-controller-manager-6576b87f9c-t8tsq\" (UID: \"6525c86b-8810-4639-8d16-93d25fac15a9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630664 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcf77\" (UniqueName: \"kubernetes.io/projected/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-kube-api-access-mcf77\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630686 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47mlv\" (UniqueName: \"kubernetes.io/projected/d572b4ba-2f55-43ef-8b71-af94f9519768-kube-api-access-47mlv\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630712 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-image-import-ca\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630738 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-config\") pod \"controller-manager-879f6c89f-dhjpc\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630767 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c35257f3-6d8a-4917-a956-3b71a0e54c23-config\") pod \"machine-api-operator-5694c8668f-mnwzz\" (UID: \"c35257f3-6d8a-4917-a956-3b71a0e54c23\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mnwzz"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630790 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9xk6\" (UniqueName: \"kubernetes.io/projected/238886b4-14ad-4a1c-8ba4-84b652601186-kube-api-access-v9xk6\") pod \"authentication-operator-69f744f599-25rmn\" (UID: \"238886b4-14ad-4a1c-8ba4-84b652601186\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630856 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk2v4\" (UniqueName: \"kubernetes.io/projected/fc58cdb9-8e5c-426c-a193-994e3b2ce117-kube-api-access-tk2v4\") pod \"controller-manager-879f6c89f-dhjpc\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630885 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630906 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f85054-6343-454e-9f9f-eebadd266b08-config\") pod \"machine-approver-56656f9798-92hfr\" (UID: \"67f85054-6343-454e-9f9f-eebadd266b08\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92hfr"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630931 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/67f85054-6343-454e-9f9f-eebadd266b08-machine-approver-tls\") pod \"machine-approver-56656f9798-92hfr\" (UID: \"67f85054-6343-454e-9f9f-eebadd266b08\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92hfr"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630956 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-node-pullsecrets\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.630980 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-config\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.631001 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.631023 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d572b4ba-2f55-43ef-8b71-af94f9519768-serving-cert\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.631079 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/238886b4-14ad-4a1c-8ba4-84b652601186-serving-cert\") pod \"authentication-operator-69f744f599-25rmn\" (UID: \"238886b4-14ad-4a1c-8ba4-84b652601186\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.631107 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b0a0a1c1-7631-4b40-8a54-268af3d95cb6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-kn5fp\" (UID: \"b0a0a1c1-7631-4b40-8a54-268af3d95cb6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.631132 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6525c86b-8810-4639-8d16-93d25fac15a9-client-ca\") pod \"route-controller-manager-6576b87f9c-t8tsq\" (UID: \"6525c86b-8810-4639-8d16-93d25fac15a9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.631152 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25qjz\" (UniqueName: \"kubernetes.io/projected/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-kube-api-access-25qjz\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.631177 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw8cs\" (UniqueName: \"kubernetes.io/projected/c35257f3-6d8a-4917-a956-3b71a0e54c23-kube-api-access-bw8cs\") pod \"machine-api-operator-5694c8668f-mnwzz\" (UID: \"c35257f3-6d8a-4917-a956-3b71a0e54c23\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mnwzz"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.631201 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d572b4ba-2f55-43ef-8b71-af94f9519768-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.631437 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.631750 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.631907 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.632053 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.632245 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.632418 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.632524 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.632712 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.632836 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.633231 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.633242 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.633461 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.633606 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.633733 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.633910 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.634069 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.634847 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635484 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-client-ca\") pod \"controller-manager-879f6c89f-dhjpc\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635533 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635574 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-etcd-serving-ca\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635598 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgf8j\" (UniqueName: \"kubernetes.io/projected/67f85054-6343-454e-9f9f-eebadd266b08-kube-api-access-dgf8j\") pod \"machine-approver-56656f9798-92hfr\" (UID: \"67f85054-6343-454e-9f9f-eebadd266b08\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92hfr"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635623 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635655 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c35257f3-6d8a-4917-a956-3b71a0e54c23-images\") pod \"machine-api-operator-5694c8668f-mnwzz\" (UID: \"c35257f3-6d8a-4917-a956-3b71a0e54c23\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mnwzz"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635676 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dhjpc\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635697 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-audit-dir\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635718 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635749 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96kkw\" (UniqueName: \"kubernetes.io/projected/58192d0b-35de-4d58-8037-559360392628-kube-api-access-96kkw\") pod \"openshift-apiserver-operator-796bbdcf4f-w6s52\" (UID: \"58192d0b-35de-4d58-8037-559360392628\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w6s52"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635770 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/48998ce4-56d3-439e-90c5-c7caa4b8344f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fq5sj\" (UID: \"48998ce4-56d3-439e-90c5-c7caa4b8344f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fq5sj"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635795 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc58cdb9-8e5c-426c-a193-994e3b2ce117-serving-cert\") pod \"controller-manager-879f6c89f-dhjpc\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635820 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635851 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1c56430-35ae-4e7c-9f5a-108205dbe2b3-serving-cert\") pod \"console-operator-58897d9998-5zz49\" (UID: \"f1c56430-35ae-4e7c-9f5a-108205dbe2b3\") " pod="openshift-console-operator/console-operator-58897d9998-5zz49"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635877 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-encryption-config\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635898 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-etcd-client\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635919 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/238886b4-14ad-4a1c-8ba4-84b652601186-config\") pod \"authentication-operator-69f744f599-25rmn\" (UID: \"238886b4-14ad-4a1c-8ba4-84b652601186\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635947 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635969 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0a0a1c1-7631-4b40-8a54-268af3d95cb6-serving-cert\") pod \"openshift-config-operator-7777fb866f-kn5fp\" (UID: \"b0a0a1c1-7631-4b40-8a54-268af3d95cb6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.635989 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-serving-cert\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.636010 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6525c86b-8810-4639-8d16-93d25fac15a9-config\") pod \"route-controller-manager-6576b87f9c-t8tsq\" (UID: \"6525c86b-8810-4639-8d16-93d25fac15a9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.636032 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6525c86b-8810-4639-8d16-93d25fac15a9-serving-cert\") pod \"route-controller-manager-6576b87f9c-t8tsq\" (UID: \"6525c86b-8810-4639-8d16-93d25fac15a9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.636055 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58192d0b-35de-4d58-8037-559360392628-config\") pod \"openshift-apiserver-operator-796bbdcf4f-w6s52\" (UID: \"58192d0b-35de-4d58-8037-559360392628\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w6s52"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.636078 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.636095 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c35257f3-6d8a-4917-a956-3b71a0e54c23-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mnwzz\" (UID: \"c35257f3-6d8a-4917-a956-3b71a0e54c23\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mnwzz"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.636121 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-audit\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.636160 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67f85054-6343-454e-9f9f-eebadd266b08-auth-proxy-config\") pod \"machine-approver-56656f9798-92hfr\" (UID: \"67f85054-6343-454e-9f9f-eebadd266b08\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92hfr"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.637277 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-audit-dir\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.637440 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.637568 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.642427 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-image-import-ca\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.643023 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dhjpc\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.643800 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-config\") pod \"controller-manager-879f6c89f-dhjpc\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.643886 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hg5vm"]
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.668563 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.668793 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.668829 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.669248 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.669464 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.669564 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mnwzz"]
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.670052 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc58cdb9-8e5c-426c-a193-994e3b2ce117-serving-cert\") pod \"controller-manager-879f6c89f-dhjpc\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.670682 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-audit\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.670834 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.670949 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2x4pn"]
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.671842 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-node-pullsecrets\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.672455 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-etcd-serving-ca\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.672700 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nkzc2"]
Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.674173 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-config\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") "
pod="openshift-apiserver/apiserver-76f77b778f-hg5vm" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.674355 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.675392 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.675746 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.675913 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.677755 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.683374 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.683881 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.689291 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.690396 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.690645 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.690791 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.690940 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.691156 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.696984 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.697960 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n9tp2"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.698628 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n9tp2" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.709159 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-serving-cert\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.710011 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-encryption-config\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.710200 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-etcd-client\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.710305 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.714355 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhjpc"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.717847 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.727258 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk2v4\" (UniqueName: \"kubernetes.io/projected/fc58cdb9-8e5c-426c-a193-994e3b2ce117-kube-api-access-tk2v4\") pod \"controller-manager-879f6c89f-dhjpc\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.729408 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4zqn6"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.730376 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4zqn6" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.730487 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.732807 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rzw49"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.733628 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rzw49" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.735162 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25qjz\" (UniqueName: \"kubernetes.io/projected/7cb01aaa-41aa-442c-b18c-d345abd3d3d9-kube-api-access-25qjz\") pod \"apiserver-76f77b778f-hg5vm\" (UID: \"7cb01aaa-41aa-442c-b18c-d345abd3d3d9\") " pod="openshift-apiserver/apiserver-76f77b778f-hg5vm" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.741812 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d572b4ba-2f55-43ef-8b71-af94f9519768-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.741861 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w98tv\" (UniqueName: \"kubernetes.io/projected/b0a0a1c1-7631-4b40-8a54-268af3d95cb6-kube-api-access-w98tv\") pod \"openshift-config-operator-7777fb866f-kn5fp\" (UID: \"b0a0a1c1-7631-4b40-8a54-268af3d95cb6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.741901 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.741919 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2vb6\" (UniqueName: \"kubernetes.io/projected/f1c56430-35ae-4e7c-9f5a-108205dbe2b3-kube-api-access-c2vb6\") pod \"console-operator-58897d9998-5zz49\" (UID: \"f1c56430-35ae-4e7c-9f5a-108205dbe2b3\") " pod="openshift-console-operator/console-operator-58897d9998-5zz49" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.741937 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d572b4ba-2f55-43ef-8b71-af94f9519768-encryption-config\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.741953 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1c56430-35ae-4e7c-9f5a-108205dbe2b3-trusted-ca\") pod \"console-operator-58897d9998-5zz49\" (UID: \"f1c56430-35ae-4e7c-9f5a-108205dbe2b3\") " pod="openshift-console-operator/console-operator-58897d9998-5zz49" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.741968 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d572b4ba-2f55-43ef-8b71-af94f9519768-etcd-client\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.741984 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58192d0b-35de-4d58-8037-559360392628-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-w6s52\" (UID: \"58192d0b-35de-4d58-8037-559360392628\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w6s52" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742003 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/238886b4-14ad-4a1c-8ba4-84b652601186-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-25rmn\" (UID: \"238886b4-14ad-4a1c-8ba4-84b652601186\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742025 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-audit-policies\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742045 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg4zb\" (UniqueName: \"kubernetes.io/projected/48998ce4-56d3-439e-90c5-c7caa4b8344f-kube-api-access-gg4zb\") pod \"cluster-samples-operator-665b6dd947-fq5sj\" (UID: \"48998ce4-56d3-439e-90c5-c7caa4b8344f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fq5sj" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742061 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d572b4ba-2f55-43ef-8b71-af94f9519768-audit-dir\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742087 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742105 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742125 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d572b4ba-2f55-43ef-8b71-af94f9519768-audit-policies\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742151 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f1c56430-35ae-4e7c-9f5a-108205dbe2b3-config\") pod \"console-operator-58897d9998-5zz49\" (UID: \"f1c56430-35ae-4e7c-9f5a-108205dbe2b3\") " pod="openshift-console-operator/console-operator-58897d9998-5zz49" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742378 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/238886b4-14ad-4a1c-8ba4-84b652601186-service-ca-bundle\") pod \"authentication-operator-69f744f599-25rmn\" (UID: \"238886b4-14ad-4a1c-8ba4-84b652601186\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742405 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742427 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk7g5\" (UniqueName: \"kubernetes.io/projected/6525c86b-8810-4639-8d16-93d25fac15a9-kube-api-access-nk7g5\") pod \"route-controller-manager-6576b87f9c-t8tsq\" (UID: \"6525c86b-8810-4639-8d16-93d25fac15a9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742444 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcf77\" (UniqueName: \"kubernetes.io/projected/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-kube-api-access-mcf77\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742464 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47mlv\" (UniqueName: \"kubernetes.io/projected/d572b4ba-2f55-43ef-8b71-af94f9519768-kube-api-access-47mlv\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742482 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9xk6\" (UniqueName: \"kubernetes.io/projected/238886b4-14ad-4a1c-8ba4-84b652601186-kube-api-access-v9xk6\") pod \"authentication-operator-69f744f599-25rmn\" (UID: \"238886b4-14ad-4a1c-8ba4-84b652601186\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742498 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c35257f3-6d8a-4917-a956-3b71a0e54c23-config\") pod \"machine-api-operator-5694c8668f-mnwzz\" (UID: \"c35257f3-6d8a-4917-a956-3b71a0e54c23\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mnwzz" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742544 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f85054-6343-454e-9f9f-eebadd266b08-config\") pod 
\"machine-approver-56656f9798-92hfr\" (UID: \"67f85054-6343-454e-9f9f-eebadd266b08\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92hfr" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742565 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742582 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d572b4ba-2f55-43ef-8b71-af94f9519768-serving-cert\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742600 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/67f85054-6343-454e-9f9f-eebadd266b08-machine-approver-tls\") pod \"machine-approver-56656f9798-92hfr\" (UID: \"67f85054-6343-454e-9f9f-eebadd266b08\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92hfr" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742629 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6525c86b-8810-4639-8d16-93d25fac15a9-client-ca\") pod \"route-controller-manager-6576b87f9c-t8tsq\" (UID: \"6525c86b-8810-4639-8d16-93d25fac15a9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742654 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/238886b4-14ad-4a1c-8ba4-84b652601186-serving-cert\") pod \"authentication-operator-69f744f599-25rmn\" (UID: \"238886b4-14ad-4a1c-8ba4-84b652601186\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742670 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b0a0a1c1-7631-4b40-8a54-268af3d95cb6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-kn5fp\" (UID: \"b0a0a1c1-7631-4b40-8a54-268af3d95cb6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742689 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw8cs\" (UniqueName: \"kubernetes.io/projected/c35257f3-6d8a-4917-a956-3b71a0e54c23-kube-api-access-bw8cs\") pod \"machine-api-operator-5694c8668f-mnwzz\" (UID: \"c35257f3-6d8a-4917-a956-3b71a0e54c23\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mnwzz" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742704 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d572b4ba-2f55-43ef-8b71-af94f9519768-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742732 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742751 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgf8j\" (UniqueName: \"kubernetes.io/projected/67f85054-6343-454e-9f9f-eebadd266b08-kube-api-access-dgf8j\") pod \"machine-approver-56656f9798-92hfr\" (UID: \"67f85054-6343-454e-9f9f-eebadd266b08\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92hfr" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742773 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742795 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c35257f3-6d8a-4917-a956-3b71a0e54c23-images\") pod \"machine-api-operator-5694c8668f-mnwzz\" (UID: \"c35257f3-6d8a-4917-a956-3b71a0e54c23\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mnwzz" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742819 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-audit-dir\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742837 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742856 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96kkw\" (UniqueName: \"kubernetes.io/projected/58192d0b-35de-4d58-8037-559360392628-kube-api-access-96kkw\") pod \"openshift-apiserver-operator-796bbdcf4f-w6s52\" (UID: \"58192d0b-35de-4d58-8037-559360392628\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w6s52" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742874 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: 
I0121 13:04:54.742893 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1c56430-35ae-4e7c-9f5a-108205dbe2b3-serving-cert\") pod \"console-operator-58897d9998-5zz49\" (UID: \"f1c56430-35ae-4e7c-9f5a-108205dbe2b3\") " pod="openshift-console-operator/console-operator-58897d9998-5zz49" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742909 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/48998ce4-56d3-439e-90c5-c7caa4b8344f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fq5sj\" (UID: \"48998ce4-56d3-439e-90c5-c7caa4b8344f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fq5sj" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742931 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742949 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/238886b4-14ad-4a1c-8ba4-84b652601186-config\") pod \"authentication-operator-69f744f599-25rmn\" (UID: \"238886b4-14ad-4a1c-8ba4-84b652601186\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742969 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d572b4ba-2f55-43ef-8b71-af94f9519768-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.742976 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6525c86b-8810-4639-8d16-93d25fac15a9-config\") pod \"route-controller-manager-6576b87f9c-t8tsq\" (UID: \"6525c86b-8810-4639-8d16-93d25fac15a9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.743066 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6525c86b-8810-4639-8d16-93d25fac15a9-serving-cert\") pod \"route-controller-manager-6576b87f9c-t8tsq\" (UID: \"6525c86b-8810-4639-8d16-93d25fac15a9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.743300 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/238886b4-14ad-4a1c-8ba4-84b652601186-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-25rmn\" (UID: \"238886b4-14ad-4a1c-8ba4-84b652601186\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.743950 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6525c86b-8810-4639-8d16-93d25fac15a9-config\") pod \"route-controller-manager-6576b87f9c-t8tsq\" (UID: \"6525c86b-8810-4639-8d16-93d25fac15a9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.744263 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58192d0b-35de-4d58-8037-559360392628-config\") pod \"openshift-apiserver-operator-796bbdcf4f-w6s52\" (UID: \"58192d0b-35de-4d58-8037-559360392628\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w6s52" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.744315 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0a0a1c1-7631-4b40-8a54-268af3d95cb6-serving-cert\") pod \"openshift-config-operator-7777fb866f-kn5fp\" (UID: \"b0a0a1c1-7631-4b40-8a54-268af3d95cb6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.744359 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.744395 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c35257f3-6d8a-4917-a956-3b71a0e54c23-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mnwzz\" (UID: \"c35257f3-6d8a-4917-a956-3b71a0e54c23\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mnwzz" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.744426 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67f85054-6343-454e-9f9f-eebadd266b08-auth-proxy-config\") pod \"machine-approver-56656f9798-92hfr\" (UID: \"67f85054-6343-454e-9f9f-eebadd266b08\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92hfr" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.744630 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6525c86b-8810-4639-8d16-93d25fac15a9-client-ca\") pod \"route-controller-manager-6576b87f9c-t8tsq\" (UID: \"6525c86b-8810-4639-8d16-93d25fac15a9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.744679 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-audit-dir\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.745226 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c35257f3-6d8a-4917-a956-3b71a0e54c23-images\") pod \"machine-api-operator-5694c8668f-mnwzz\" (UID: 
\"c35257f3-6d8a-4917-a956-3b71a0e54c23\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mnwzz" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.745319 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67f85054-6343-454e-9f9f-eebadd266b08-auth-proxy-config\") pod \"machine-approver-56656f9798-92hfr\" (UID: \"67f85054-6343-454e-9f9f-eebadd266b08\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92hfr" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.745449 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d572b4ba-2f55-43ef-8b71-af94f9519768-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.747804 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c35257f3-6d8a-4917-a956-3b71a0e54c23-config\") pod \"machine-api-operator-5694c8668f-mnwzz\" (UID: \"c35257f3-6d8a-4917-a956-3b71a0e54c23\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mnwzz" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.747899 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-audit-policies\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.748011 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d572b4ba-2f55-43ef-8b71-af94f9519768-audit-dir\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.749463 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.749658 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1c56430-35ae-4e7c-9f5a-108205dbe2b3-trusted-ca\") pod \"console-operator-58897d9998-5zz49\" (UID: \"f1c56430-35ae-4e7c-9f5a-108205dbe2b3\") " pod="openshift-console-operator/console-operator-58897d9998-5zz49" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.751502 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b0a0a1c1-7631-4b40-8a54-268af3d95cb6-available-featuregates\") pod \"openshift-config-operator-7777fb866f-kn5fp\" (UID: \"b0a0a1c1-7631-4b40-8a54-268af3d95cb6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.751821 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.752839 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-57jzj"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.752654 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f85054-6343-454e-9f9f-eebadd266b08-config\") pod \"machine-approver-56656f9798-92hfr\" (UID: \"67f85054-6343-454e-9f9f-eebadd266b08\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92hfr" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.753190 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d572b4ba-2f55-43ef-8b71-af94f9519768-audit-policies\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.753539 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.754067 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c56430-35ae-4e7c-9f5a-108205dbe2b3-config\") pod \"console-operator-58897d9998-5zz49\" (UID: \"f1c56430-35ae-4e7c-9f5a-108205dbe2b3\") " pod="openshift-console-operator/console-operator-58897d9998-5zz49" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.754180 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/67f85054-6343-454e-9f9f-eebadd266b08-machine-approver-tls\") pod \"machine-approver-56656f9798-92hfr\" (UID: \"67f85054-6343-454e-9f9f-eebadd266b08\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92hfr" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.754352 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58192d0b-35de-4d58-8037-559360392628-config\") pod \"openshift-apiserver-operator-796bbdcf4f-w6s52\" (UID: \"58192d0b-35de-4d58-8037-559360392628\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w6s52" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.754731 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d572b4ba-2f55-43ef-8b71-af94f9519768-encryption-config\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.755242 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-25rmn"] Jan 21 
13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.755271 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-mww4w"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.755358 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/238886b4-14ad-4a1c-8ba4-84b652601186-serving-cert\") pod \"authentication-operator-69f744f599-25rmn\" (UID: \"238886b4-14ad-4a1c-8ba4-84b652601186\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.755566 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.755887 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.756258 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.756305 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58192d0b-35de-4d58-8037-559360392628-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-w6s52\" (UID: \"58192d0b-35de-4d58-8037-559360392628\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w6s52" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.756399 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/238886b4-14ad-4a1c-8ba4-84b652601186-config\") pod \"authentication-operator-69f744f599-25rmn\" (UID: \"238886b4-14ad-4a1c-8ba4-84b652601186\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.756731 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.756776 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.756823 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.756840 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/238886b4-14ad-4a1c-8ba4-84b652601186-service-ca-bundle\") pod \"authentication-operator-69f744f599-25rmn\" (UID: \"238886b4-14ad-4a1c-8ba4-84b652601186\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.757307 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.757389 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-57jzj" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.757973 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.758139 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.758191 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1c56430-35ae-4e7c-9f5a-108205dbe2b3-serving-cert\") pod \"console-operator-58897d9998-5zz49\" (UID: \"f1c56430-35ae-4e7c-9f5a-108205dbe2b3\") " pod="openshift-console-operator/console-operator-58897d9998-5zz49" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.758811 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6525c86b-8810-4639-8d16-93d25fac15a9-serving-cert\") pod \"route-controller-manager-6576b87f9c-t8tsq\" (UID: \"6525c86b-8810-4639-8d16-93d25fac15a9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.758877 4765 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6t7p"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.759546 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/48998ce4-56d3-439e-90c5-c7caa4b8344f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fq5sj\" (UID: \"48998ce4-56d3-439e-90c5-c7caa4b8344f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fq5sj" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.760065 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.760360 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6t7p" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.762742 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.763087 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d572b4ba-2f55-43ef-8b71-af94f9519768-serving-cert\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.764700 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d572b4ba-2f55-43ef-8b71-af94f9519768-etcd-client\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.767757 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0a0a1c1-7631-4b40-8a54-268af3d95cb6-serving-cert\") pod \"openshift-config-operator-7777fb866f-kn5fp\" (UID: \"b0a0a1c1-7631-4b40-8a54-268af3d95cb6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.769751 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hg5vm" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.770383 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.770796 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-r5c5g"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.771841 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.787561 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.788319 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5c5g" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.789779 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.791983 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-srbvc"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.792404 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.793863 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xjvvk"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.794134 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-srbvc" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.797417 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qxjpg"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.798340 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xjvvk" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.803861 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.804063 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qxjpg" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.805786 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c35257f3-6d8a-4917-a956-3b71a0e54c23-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mnwzz\" (UID: \"c35257f3-6d8a-4917-a956-3b71a0e54c23\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mnwzz" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.806083 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-79kcs"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.807538 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.808433 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.809039 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.810199 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-79kcs" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.810539 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x4zpp"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.810931 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.812069 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.812284 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x4zpp" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.813933 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dzwvz"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.814895 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.815448 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.815966 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f5w2t"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.816580 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5w2t" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.816859 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.817166 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2dct6"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.817370 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.817610 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2dct6" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.819536 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.821443 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.828487 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rzw49"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.829047 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.831254 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nkzc2"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.835527 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5zz49"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.838409 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-l7658"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.840307 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4zqn6"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.845178 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.845231 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w6s52"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.845427 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.850488 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.850613 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.855608 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9bbb7"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.857015 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.857969 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-r5c5g"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.860103 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-cvk5w"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.862094 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cvk5w" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.866283 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qxjpg"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.867785 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.869087 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-xdcw7"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.870168 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xdcw7" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.872509 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n9tp2"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.873414 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fq5sj"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.874696 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5cllr"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.875419 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.876711 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6t7p"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.878649 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.879339 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xjvvk"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.880994 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-57jzj"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.883336 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xfs5k"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.883726 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x4zpp"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.886467 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2x4pn"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.888748 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-7ss5d"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.889673 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-7ss5d" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.892201 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.893772 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cvk5w"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.895044 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.895418 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.897033 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-srbvc"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.898790 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-79kcs"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.900150 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9bbb7"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.902015 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f5w2t"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.904851 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xdcw7"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.907283 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dzwvz"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.907342 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.908842 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2dct6"] Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.948116 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990066 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5d4723c5-1628-4481-83b8-498fd4e5362e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990106 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9ftz\" (UniqueName: \"kubernetes.io/projected/193f8517-94f7-42fe-9fe2-0bdb69cc8424-kube-api-access-v9ftz\") pod \"downloads-7954f5f757-5cllr\" (UID: \"193f8517-94f7-42fe-9fe2-0bdb69cc8424\") " pod="openshift-console/downloads-7954f5f757-5cllr" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990134 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/5d4723c5-1628-4481-83b8-498fd4e5362e-registry-tls\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990151 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d4723c5-1628-4481-83b8-498fd4e5362e-trusted-ca\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990177 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0be5f3b8-eeae-405b-a836-e806531a57e0-console-oauth-config\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990199 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990236 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-trusted-ca-bundle\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990264 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e2101c8-3b98-4b58-959a-2cda2a8d08cb-serving-cert\") pod \"etcd-operator-b45778765-nkzc2\" (UID: \"9e2101c8-3b98-4b58-959a-2cda2a8d08cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990300 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-oauth-serving-cert\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990319 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-console-config\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990350 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/95b5195e-ccd9-451a-baf8-ee70aaa0e650-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d44mx\" (UID: 
\"95b5195e-ccd9-451a-baf8-ee70aaa0e650\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990397 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-service-ca\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990424 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5d4723c5-1628-4481-83b8-498fd4e5362e-registry-certificates\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990440 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0be5f3b8-eeae-405b-a836-e806531a57e0-console-serving-cert\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990463 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e2101c8-3b98-4b58-959a-2cda2a8d08cb-config\") pod \"etcd-operator-b45778765-nkzc2\" (UID: \"9e2101c8-3b98-4b58-959a-2cda2a8d08cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990478 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvmgm\" (UniqueName: \"kubernetes.io/projected/0be5f3b8-eeae-405b-a836-e806531a57e0-kube-api-access-fvmgm\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990521 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5d4723c5-1628-4481-83b8-498fd4e5362e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990549 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjgsk\" (UniqueName: \"kubernetes.io/projected/9e2101c8-3b98-4b58-959a-2cda2a8d08cb-kube-api-access-vjgsk\") pod \"etcd-operator-b45778765-nkzc2\" (UID: \"9e2101c8-3b98-4b58-959a-2cda2a8d08cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990577 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9e2101c8-3b98-4b58-959a-2cda2a8d08cb-etcd-client\") pod \"etcd-operator-b45778765-nkzc2\" (UID: \"9e2101c8-3b98-4b58-959a-2cda2a8d08cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 
13:04:54.990614 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e2101c8-3b98-4b58-959a-2cda2a8d08cb-etcd-service-ca\") pod \"etcd-operator-b45778765-nkzc2\" (UID: \"9e2101c8-3b98-4b58-959a-2cda2a8d08cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990658 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9e2101c8-3b98-4b58-959a-2cda2a8d08cb-etcd-ca\") pod \"etcd-operator-b45778765-nkzc2\" (UID: \"9e2101c8-3b98-4b58-959a-2cda2a8d08cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990672 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/95b5195e-ccd9-451a-baf8-ee70aaa0e650-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d44mx\" (UID: \"95b5195e-ccd9-451a-baf8-ee70aaa0e650\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990702 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d4723c5-1628-4481-83b8-498fd4e5362e-bound-sa-token\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990721 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4trd\" (UniqueName: \"kubernetes.io/projected/5d4723c5-1628-4481-83b8-498fd4e5362e-kube-api-access-x4trd\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990740 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/95b5195e-ccd9-451a-baf8-ee70aaa0e650-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d44mx\" (UID: \"95b5195e-ccd9-451a-baf8-ee70aaa0e650\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx" Jan 21 13:04:54 crc kubenswrapper[4765]: I0121 13:04:54.990757 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wcb2\" (UniqueName: \"kubernetes.io/projected/95b5195e-ccd9-451a-baf8-ee70aaa0e650-kube-api-access-5wcb2\") pod \"cluster-image-registry-operator-dc59b4c8b-d44mx\" (UID: \"95b5195e-ccd9-451a-baf8-ee70aaa0e650\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx" Jan 21 13:04:54 crc kubenswrapper[4765]: E0121 13:04:54.991084 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:55.491070946 +0000 UTC m=+156.508796768 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.001593 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.002005 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.009876 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.029421 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.047686 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.067253 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.076200 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hg5vm"] Jan 21 13:04:55 crc kubenswrapper[4765]: W0121 13:04:55.081865 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cb01aaa_41aa_442c_b18c_d345abd3d3d9.slice/crio-9aeff027c1c847168938725ac21f00f8dbafbdcf6942e594fb6b07042e615199 WatchSource:0}: Error finding container 9aeff027c1c847168938725ac21f00f8dbafbdcf6942e594fb6b07042e615199: Status 404 returned error can't find the container with id 9aeff027c1c847168938725ac21f00f8dbafbdcf6942e594fb6b07042e615199 Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.087991 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.092719 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093065 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/95b5195e-ccd9-451a-baf8-ee70aaa0e650-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d44mx\" (UID: \"95b5195e-ccd9-451a-baf8-ee70aaa0e650\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093103 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxfs8\" (UniqueName: 
\"kubernetes.io/projected/7abbef71-8ead-4d5e-afc9-45a1195804cd-kube-api-access-bxfs8\") pod \"packageserver-d55dfcdfc-9w7tp\" (UID: \"7abbef71-8ead-4d5e-afc9-45a1195804cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093127 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/320c7cb7-c625-492f-9cab-d9f2858c5742-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rzw49\" (UID: \"320c7cb7-c625-492f-9cab-d9f2858c5742\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rzw49" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093148 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-config-volume\") pod \"collect-profiles-29483340-pnjtd\" (UID: \"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093174 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5d4723c5-1628-4481-83b8-498fd4e5362e-registry-tls\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093198 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzdrg\" (UniqueName: \"kubernetes.io/projected/02ed0834-ca94-42c2-a597-4f8b72d265b5-kube-api-access-hzdrg\") pod \"kube-storage-version-migrator-operator-b67b599dd-xjvvk\" (UID: \"02ed0834-ca94-42c2-a597-4f8b72d265b5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xjvvk" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093236 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/18432a06-6a3b-451d-87d6-42ca779acf9f-metrics-tls\") pod \"ingress-operator-5b745b69d9-hlkc2\" (UID: \"18432a06-6a3b-451d-87d6-42ca779acf9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093258 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d4723c5-1628-4481-83b8-498fd4e5362e-trusted-ca\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093279 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-secret-volume\") pod \"collect-profiles-29483340-pnjtd\" (UID: \"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093297 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/d6e131d4-811c-416f-bbc9-e83007e9a548-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rhvp6\" (UID: \"d6e131d4-811c-416f-bbc9-e83007e9a548\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093318 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-trusted-ca-bundle\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093339 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/320c7cb7-c625-492f-9cab-d9f2858c5742-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rzw49\" (UID: \"320c7cb7-c625-492f-9cab-d9f2858c5742\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rzw49" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093359 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dab7fc80-6af1-4650-9cc6-875e36327b3f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6t7p\" (UID: \"dab7fc80-6af1-4650-9cc6-875e36327b3f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6t7p" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093389 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbpkt\" (UniqueName: \"kubernetes.io/projected/fc2d5125-b816-4500-a5f1-99e7fd676f23-kube-api-access-mbpkt\") pod \"migrator-59844c95c7-srbvc\" (UID: \"fc2d5125-b816-4500-a5f1-99e7fd676f23\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-srbvc" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093407 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec425a1a-0a7b-43ad-bbcd-ae31a5176e2b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qxjpg\" (UID: \"ec425a1a-0a7b-43ad-bbcd-ae31a5176e2b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qxjpg" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093430 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/17f0cd0d-b1e3-42d0-abde-21e830e40e5d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-79kcs\" (UID: \"17f0cd0d-b1e3-42d0-abde-21e830e40e5d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-79kcs" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093451 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-console-config\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093469 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/95b5195e-ccd9-451a-baf8-ee70aaa0e650-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d44mx\" (UID: \"95b5195e-ccd9-451a-baf8-ee70aaa0e650\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093489 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fss4k\" (UniqueName: \"kubernetes.io/projected/726a62c0-ba93-4fff-a141-09fefec9f93e-kube-api-access-fss4k\") pod \"machine-config-operator-74547568cd-mtskd\" (UID: \"726a62c0-ba93-4fff-a141-09fefec9f93e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093521 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5q9s\" (UniqueName: \"kubernetes.io/projected/eb34fa0a-229a-4ef4-815b-93d888c19e84-kube-api-access-j5q9s\") pod \"catalog-operator-68c6474976-fp5vd\" (UID: \"eb34fa0a-229a-4ef4-815b-93d888c19e84\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093545 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7abbef71-8ead-4d5e-afc9-45a1195804cd-webhook-cert\") pod \"packageserver-d55dfcdfc-9w7tp\" (UID: \"7abbef71-8ead-4d5e-afc9-45a1195804cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093566 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2mh2\" (UniqueName: \"kubernetes.io/projected/d6e131d4-811c-416f-bbc9-e83007e9a548-kube-api-access-f2mh2\") pod \"olm-operator-6b444d44fb-rhvp6\" (UID: \"d6e131d4-811c-416f-bbc9-e83007e9a548\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093588 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1bd0de6e-9060-46fd-9b0e-aac63b762b0d-proxy-tls\") pod \"machine-config-controller-84d6567774-r5c5g\" (UID: \"1bd0de6e-9060-46fd-9b0e-aac63b762b0d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5c5g" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093608 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7c5b52bd-6cb5-4544-9c7d-b374210ae44d-registration-dir\") pod \"csi-hostpathplugin-9bbb7\" (UID: \"7c5b52bd-6cb5-4544-9c7d-b374210ae44d\") " pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093631 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40064ef5-d679-4224-af54-a21488bbbb11-config\") pod \"kube-controller-manager-operator-78b949d7b-57jzj\" (UID: \"40064ef5-d679-4224-af54-a21488bbbb11\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-57jzj" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093657 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-fvmgm\" (UniqueName: \"kubernetes.io/projected/0be5f3b8-eeae-405b-a836-e806531a57e0-kube-api-access-fvmgm\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093675 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c-certs\") pod \"machine-config-server-7ss5d\" (UID: \"36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c\") " pod="openshift-machine-config-operator/machine-config-server-7ss5d" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093695 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/18432a06-6a3b-451d-87d6-42ca779acf9f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hlkc2\" (UID: \"18432a06-6a3b-451d-87d6-42ca779acf9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093715 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg9p7\" (UniqueName: \"kubernetes.io/projected/36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c-kube-api-access-jg9p7\") pod \"machine-config-server-7ss5d\" (UID: \"36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c\") " pod="openshift-machine-config-operator/machine-config-server-7ss5d" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093737 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/50ea39eb-559e-4298-9133-4d2a5c7890cb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-x4zpp\" (UID: \"50ea39eb-559e-4298-9133-4d2a5c7890cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x4zpp" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093757 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc4bm\" (UniqueName: \"kubernetes.io/projected/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-kube-api-access-fc4bm\") pod \"collect-profiles-29483340-pnjtd\" (UID: \"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093777 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5d4723c5-1628-4481-83b8-498fd4e5362e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093800 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1bd0de6e-9060-46fd-9b0e-aac63b762b0d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-r5c5g\" (UID: \"1bd0de6e-9060-46fd-9b0e-aac63b762b0d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5c5g" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093836 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vjgsk\" (UniqueName: \"kubernetes.io/projected/9e2101c8-3b98-4b58-959a-2cda2a8d08cb-kube-api-access-vjgsk\") pod \"etcd-operator-b45778765-nkzc2\" (UID: \"9e2101c8-3b98-4b58-959a-2cda2a8d08cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093855 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7abbef71-8ead-4d5e-afc9-45a1195804cd-tmpfs\") pod \"packageserver-d55dfcdfc-9w7tp\" (UID: \"7abbef71-8ead-4d5e-afc9-45a1195804cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093875 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9e2101c8-3b98-4b58-959a-2cda2a8d08cb-etcd-client\") pod \"etcd-operator-b45778765-nkzc2\" (UID: \"9e2101c8-3b98-4b58-959a-2cda2a8d08cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093904 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/576c7738-88c3-450e-b9c2-c291f73191b8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-n9tp2\" (UID: \"576c7738-88c3-450e-b9c2-c291f73191b8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n9tp2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093925 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02ed0834-ca94-42c2-a597-4f8b72d265b5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xjvvk\" (UID: \"02ed0834-ca94-42c2-a597-4f8b72d265b5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xjvvk" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093943 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/576c7738-88c3-450e-b9c2-c291f73191b8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-n9tp2\" (UID: \"576c7738-88c3-450e-b9c2-c291f73191b8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n9tp2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093962 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/726a62c0-ba93-4fff-a141-09fefec9f93e-images\") pod \"machine-config-operator-74547568cd-mtskd\" (UID: \"726a62c0-ba93-4fff-a141-09fefec9f93e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093982 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9e2101c8-3b98-4b58-959a-2cda2a8d08cb-etcd-ca\") pod \"etcd-operator-b45778765-nkzc2\" (UID: \"9e2101c8-3b98-4b58-959a-2cda2a8d08cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.093999 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-462gv\" 
(UniqueName: \"kubernetes.io/projected/ec425a1a-0a7b-43ad-bbcd-ae31a5176e2b-kube-api-access-462gv\") pod \"package-server-manager-789f6589d5-qxjpg\" (UID: \"ec425a1a-0a7b-43ad-bbcd-ae31a5176e2b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qxjpg" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094018 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d4723c5-1628-4481-83b8-498fd4e5362e-bound-sa-token\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094035 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/eb34fa0a-229a-4ef4-815b-93d888c19e84-srv-cert\") pod \"catalog-operator-68c6474976-fp5vd\" (UID: \"eb34fa0a-229a-4ef4-815b-93d888c19e84\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094056 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4trd\" (UniqueName: \"kubernetes.io/projected/5d4723c5-1628-4481-83b8-498fd4e5362e-kube-api-access-x4trd\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094077 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/95b5195e-ccd9-451a-baf8-ee70aaa0e650-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d44mx\" (UID: \"95b5195e-ccd9-451a-baf8-ee70aaa0e650\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094096 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wcb2\" (UniqueName: \"kubernetes.io/projected/95b5195e-ccd9-451a-baf8-ee70aaa0e650-kube-api-access-5wcb2\") pod \"cluster-image-registry-operator-dc59b4c8b-d44mx\" (UID: \"95b5195e-ccd9-451a-baf8-ee70aaa0e650\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094114 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18432a06-6a3b-451d-87d6-42ca779acf9f-trusted-ca\") pod \"ingress-operator-5b745b69d9-hlkc2\" (UID: \"18432a06-6a3b-451d-87d6-42ca779acf9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094134 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de62a4d5-de79-4ad5-983d-7071fb85dce8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dzwvz\" (UID: \"de62a4d5-de79-4ad5-983d-7071fb85dce8\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094150 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/de62a4d5-de79-4ad5-983d-7071fb85dce8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dzwvz\" (UID: \"de62a4d5-de79-4ad5-983d-7071fb85dce8\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094169 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-475rj\" (UniqueName: \"kubernetes.io/projected/be901288-fb35-4b18-a7a6-92bebcc7ff38-kube-api-access-475rj\") pod \"ingress-canary-cvk5w\" (UID: \"be901288-fb35-4b18-a7a6-92bebcc7ff38\") " pod="openshift-ingress-canary/ingress-canary-cvk5w" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094185 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d6t9\" (UniqueName: \"kubernetes.io/projected/18432a06-6a3b-451d-87d6-42ca779acf9f-kube-api-access-8d6t9\") pod \"ingress-operator-5b745b69d9-hlkc2\" (UID: \"18432a06-6a3b-451d-87d6-42ca779acf9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094229 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5d4723c5-1628-4481-83b8-498fd4e5362e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094254 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9ftz\" (UniqueName: \"kubernetes.io/projected/193f8517-94f7-42fe-9fe2-0bdb69cc8424-kube-api-access-v9ftz\") pod \"downloads-7954f5f757-5cllr\" (UID: \"193f8517-94f7-42fe-9fe2-0bdb69cc8424\") " pod="openshift-console/downloads-7954f5f757-5cllr" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094280 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/eca25558-ed2d-42c5-bf06-b19d17fe60cf-signing-cabundle\") pod \"service-ca-9c57cc56f-2dct6\" (UID: \"eca25558-ed2d-42c5-bf06-b19d17fe60cf\") " pod="openshift-service-ca/service-ca-9c57cc56f-2dct6" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094297 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7abbef71-8ead-4d5e-afc9-45a1195804cd-apiservice-cert\") pod \"packageserver-d55dfcdfc-9w7tp\" (UID: \"7abbef71-8ead-4d5e-afc9-45a1195804cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094315 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7c5b52bd-6cb5-4544-9c7d-b374210ae44d-csi-data-dir\") pod \"csi-hostpathplugin-9bbb7\" (UID: \"7c5b52bd-6cb5-4544-9c7d-b374210ae44d\") " pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094333 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2w7s\" (UniqueName: \"kubernetes.io/projected/6f858ebc-0551-4b6c-86e5-ab124ca2b27f-kube-api-access-z2w7s\") pod \"dns-operator-744455d44c-4zqn6\" (UID: 
\"6f858ebc-0551-4b6c-86e5-ab124ca2b27f\") " pod="openshift-dns-operator/dns-operator-744455d44c-4zqn6" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094357 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0be5f3b8-eeae-405b-a836-e806531a57e0-console-oauth-config\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094376 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdngn\" (UniqueName: \"kubernetes.io/projected/0327ce11-e740-472b-8037-095af6cad376-kube-api-access-vdngn\") pod \"dns-default-xdcw7\" (UID: \"0327ce11-e740-472b-8037-095af6cad376\") " pod="openshift-dns/dns-default-xdcw7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094394 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5bq2\" (UniqueName: \"kubernetes.io/projected/de62a4d5-de79-4ad5-983d-7071fb85dce8-kube-api-access-f5bq2\") pod \"marketplace-operator-79b997595-dzwvz\" (UID: \"de62a4d5-de79-4ad5-983d-7071fb85dce8\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094427 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m9rf\" (UniqueName: \"kubernetes.io/projected/576c7738-88c3-450e-b9c2-c291f73191b8-kube-api-access-5m9rf\") pod \"openshift-controller-manager-operator-756b6f6bc6-n9tp2\" (UID: \"576c7738-88c3-450e-b9c2-c291f73191b8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n9tp2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094450 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3506449d-eca0-49d4-8a8f-dc8bc347b258-serving-cert\") pod \"service-ca-operator-777779d784-f5w2t\" (UID: \"3506449d-eca0-49d4-8a8f-dc8bc347b258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5w2t" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094470 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgn96\" (UniqueName: \"kubernetes.io/projected/3506449d-eca0-49d4-8a8f-dc8bc347b258-kube-api-access-wgn96\") pod \"service-ca-operator-777779d784-f5w2t\" (UID: \"3506449d-eca0-49d4-8a8f-dc8bc347b258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5w2t" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094489 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7c5b52bd-6cb5-4544-9c7d-b374210ae44d-plugins-dir\") pod \"csi-hostpathplugin-9bbb7\" (UID: \"7c5b52bd-6cb5-4544-9c7d-b374210ae44d\") " pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094507 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-489qb\" (UniqueName: \"kubernetes.io/projected/08434441-0009-483c-84b1-86d78ac699f4-kube-api-access-489qb\") pod \"router-default-5444994796-mww4w\" (UID: \"08434441-0009-483c-84b1-86d78ac699f4\") " 
pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094527 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e2101c8-3b98-4b58-959a-2cda2a8d08cb-serving-cert\") pod \"etcd-operator-b45778765-nkzc2\" (UID: \"9e2101c8-3b98-4b58-959a-2cda2a8d08cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094571 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svgx7\" (UniqueName: \"kubernetes.io/projected/1bd0de6e-9060-46fd-9b0e-aac63b762b0d-kube-api-access-svgx7\") pod \"machine-config-controller-84d6567774-r5c5g\" (UID: \"1bd0de6e-9060-46fd-9b0e-aac63b762b0d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5c5g" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094590 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qt4r\" (UniqueName: \"kubernetes.io/projected/50ea39eb-559e-4298-9133-4d2a5c7890cb-kube-api-access-2qt4r\") pod \"control-plane-machine-set-operator-78cbb6b69f-x4zpp\" (UID: \"50ea39eb-559e-4298-9133-4d2a5c7890cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x4zpp" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094609 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3506449d-eca0-49d4-8a8f-dc8bc347b258-config\") pod \"service-ca-operator-777779d784-f5w2t\" (UID: \"3506449d-eca0-49d4-8a8f-dc8bc347b258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5w2t" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094628 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gklcm\" (UniqueName: \"kubernetes.io/projected/7c5b52bd-6cb5-4544-9c7d-b374210ae44d-kube-api-access-gklcm\") pod \"csi-hostpathplugin-9bbb7\" (UID: \"7c5b52bd-6cb5-4544-9c7d-b374210ae44d\") " pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094646 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08434441-0009-483c-84b1-86d78ac699f4-metrics-certs\") pod \"router-default-5444994796-mww4w\" (UID: \"08434441-0009-483c-84b1-86d78ac699f4\") " pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094665 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7c5b52bd-6cb5-4544-9c7d-b374210ae44d-mountpoint-dir\") pod \"csi-hostpathplugin-9bbb7\" (UID: \"7c5b52bd-6cb5-4544-9c7d-b374210ae44d\") " pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094683 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/320c7cb7-c625-492f-9cab-d9f2858c5742-config\") pod \"kube-apiserver-operator-766d6c64bb-rzw49\" (UID: \"320c7cb7-c625-492f-9cab-d9f2858c5742\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rzw49" Jan 21 13:04:55 crc 
kubenswrapper[4765]: I0121 13:04:55.094699 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dab7fc80-6af1-4650-9cc6-875e36327b3f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6t7p\" (UID: \"dab7fc80-6af1-4650-9cc6-875e36327b3f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6t7p" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094715 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/726a62c0-ba93-4fff-a141-09fefec9f93e-proxy-tls\") pod \"machine-config-operator-74547568cd-mtskd\" (UID: \"726a62c0-ba93-4fff-a141-09fefec9f93e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094732 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-oauth-serving-cert\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094751 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08434441-0009-483c-84b1-86d78ac699f4-service-ca-bundle\") pod \"router-default-5444994796-mww4w\" (UID: \"08434441-0009-483c-84b1-86d78ac699f4\") " pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094770 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6f858ebc-0551-4b6c-86e5-ab124ca2b27f-metrics-tls\") pod \"dns-operator-744455d44c-4zqn6\" (UID: \"6f858ebc-0551-4b6c-86e5-ab124ca2b27f\") " pod="openshift-dns-operator/dns-operator-744455d44c-4zqn6" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094788 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/eca25558-ed2d-42c5-bf06-b19d17fe60cf-signing-key\") pod \"service-ca-9c57cc56f-2dct6\" (UID: \"eca25558-ed2d-42c5-bf06-b19d17fe60cf\") " pod="openshift-service-ca/service-ca-9c57cc56f-2dct6" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094805 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdbdj\" (UniqueName: \"kubernetes.io/projected/17f0cd0d-b1e3-42d0-abde-21e830e40e5d-kube-api-access-zdbdj\") pod \"multus-admission-controller-857f4d67dd-79kcs\" (UID: \"17f0cd0d-b1e3-42d0-abde-21e830e40e5d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-79kcs" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094823 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/40064ef5-d679-4224-af54-a21488bbbb11-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-57jzj\" (UID: \"40064ef5-d679-4224-af54-a21488bbbb11\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-57jzj" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094841 4765 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-service-ca\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094866 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5d4723c5-1628-4481-83b8-498fd4e5362e-registry-certificates\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094883 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0be5f3b8-eeae-405b-a836-e806531a57e0-console-serving-cert\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094900 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be901288-fb35-4b18-a7a6-92bebcc7ff38-cert\") pod \"ingress-canary-cvk5w\" (UID: \"be901288-fb35-4b18-a7a6-92bebcc7ff38\") " pod="openshift-ingress-canary/ingress-canary-cvk5w" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094917 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d6e131d4-811c-416f-bbc9-e83007e9a548-srv-cert\") pod \"olm-operator-6b444d44fb-rhvp6\" (UID: \"d6e131d4-811c-416f-bbc9-e83007e9a548\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094936 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e2101c8-3b98-4b58-959a-2cda2a8d08cb-config\") pod \"etcd-operator-b45778765-nkzc2\" (UID: \"9e2101c8-3b98-4b58-959a-2cda2a8d08cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094951 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7c5b52bd-6cb5-4544-9c7d-b374210ae44d-socket-dir\") pod \"csi-hostpathplugin-9bbb7\" (UID: \"7c5b52bd-6cb5-4544-9c7d-b374210ae44d\") " pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094977 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/08434441-0009-483c-84b1-86d78ac699f4-default-certificate\") pod \"router-default-5444994796-mww4w\" (UID: \"08434441-0009-483c-84b1-86d78ac699f4\") " pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.094997 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dab7fc80-6af1-4650-9cc6-875e36327b3f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6t7p\" (UID: \"dab7fc80-6af1-4650-9cc6-875e36327b3f\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6t7p" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.095014 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/08434441-0009-483c-84b1-86d78ac699f4-stats-auth\") pod \"router-default-5444994796-mww4w\" (UID: \"08434441-0009-483c-84b1-86d78ac699f4\") " pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.095046 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40064ef5-d679-4224-af54-a21488bbbb11-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-57jzj\" (UID: \"40064ef5-d679-4224-af54-a21488bbbb11\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-57jzj" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.095067 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0327ce11-e740-472b-8037-095af6cad376-metrics-tls\") pod \"dns-default-xdcw7\" (UID: \"0327ce11-e740-472b-8037-095af6cad376\") " pod="openshift-dns/dns-default-xdcw7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.095092 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/726a62c0-ba93-4fff-a141-09fefec9f93e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mtskd\" (UID: \"726a62c0-ba93-4fff-a141-09fefec9f93e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.095110 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/eb34fa0a-229a-4ef4-815b-93d888c19e84-profile-collector-cert\") pod \"catalog-operator-68c6474976-fp5vd\" (UID: \"eb34fa0a-229a-4ef4-815b-93d888c19e84\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.095128 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e2101c8-3b98-4b58-959a-2cda2a8d08cb-etcd-service-ca\") pod \"etcd-operator-b45778765-nkzc2\" (UID: \"9e2101c8-3b98-4b58-959a-2cda2a8d08cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.095147 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c-node-bootstrap-token\") pod \"machine-config-server-7ss5d\" (UID: \"36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c\") " pod="openshift-machine-config-operator/machine-config-server-7ss5d" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.095165 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02ed0834-ca94-42c2-a597-4f8b72d265b5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xjvvk\" (UID: \"02ed0834-ca94-42c2-a597-4f8b72d265b5\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xjvvk" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.095183 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kctjn\" (UniqueName: \"kubernetes.io/projected/eca25558-ed2d-42c5-bf06-b19d17fe60cf-kube-api-access-kctjn\") pod \"service-ca-9c57cc56f-2dct6\" (UID: \"eca25558-ed2d-42c5-bf06-b19d17fe60cf\") " pod="openshift-service-ca/service-ca-9c57cc56f-2dct6" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.095272 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0327ce11-e740-472b-8037-095af6cad376-config-volume\") pod \"dns-default-xdcw7\" (UID: \"0327ce11-e740-472b-8037-095af6cad376\") " pod="openshift-dns/dns-default-xdcw7" Jan 21 13:04:55 crc kubenswrapper[4765]: E0121 13:04:55.095457 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:55.595434813 +0000 UTC m=+156.613160635 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.096141 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-oauth-serving-cert\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.097624 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-service-ca\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.099310 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-console-config\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.099651 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5d4723c5-1628-4481-83b8-498fd4e5362e-registry-certificates\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.099872 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/5d4723c5-1628-4481-83b8-498fd4e5362e-trusted-ca\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.100162 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-trusted-ca-bundle\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.100548 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e2101c8-3b98-4b58-959a-2cda2a8d08cb-config\") pod \"etcd-operator-b45778765-nkzc2\" (UID: \"9e2101c8-3b98-4b58-959a-2cda2a8d08cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.100729 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e2101c8-3b98-4b58-959a-2cda2a8d08cb-etcd-service-ca\") pod \"etcd-operator-b45778765-nkzc2\" (UID: \"9e2101c8-3b98-4b58-959a-2cda2a8d08cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.102109 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5d4723c5-1628-4481-83b8-498fd4e5362e-registry-tls\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.102164 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0be5f3b8-eeae-405b-a836-e806531a57e0-console-serving-cert\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.102921 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9e2101c8-3b98-4b58-959a-2cda2a8d08cb-etcd-ca\") pod \"etcd-operator-b45778765-nkzc2\" (UID: \"9e2101c8-3b98-4b58-959a-2cda2a8d08cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.103145 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5d4723c5-1628-4481-83b8-498fd4e5362e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.103631 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/95b5195e-ccd9-451a-baf8-ee70aaa0e650-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d44mx\" (UID: \"95b5195e-ccd9-451a-baf8-ee70aaa0e650\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.104509 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/9e2101c8-3b98-4b58-959a-2cda2a8d08cb-etcd-client\") pod \"etcd-operator-b45778765-nkzc2\" (UID: \"9e2101c8-3b98-4b58-959a-2cda2a8d08cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.104533 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/95b5195e-ccd9-451a-baf8-ee70aaa0e650-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d44mx\" (UID: \"95b5195e-ccd9-451a-baf8-ee70aaa0e650\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.105286 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0be5f3b8-eeae-405b-a836-e806531a57e0-console-oauth-config\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.105614 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e2101c8-3b98-4b58-959a-2cda2a8d08cb-serving-cert\") pod \"etcd-operator-b45778765-nkzc2\" (UID: \"9e2101c8-3b98-4b58-959a-2cda2a8d08cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.106195 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5d4723c5-1628-4481-83b8-498fd4e5362e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.125773 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2vb6\" (UniqueName: \"kubernetes.io/projected/f1c56430-35ae-4e7c-9f5a-108205dbe2b3-kube-api-access-c2vb6\") pod \"console-operator-58897d9998-5zz49\" (UID: \"f1c56430-35ae-4e7c-9f5a-108205dbe2b3\") " pod="openshift-console-operator/console-operator-58897d9998-5zz49" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.142932 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w98tv\" (UniqueName: \"kubernetes.io/projected/b0a0a1c1-7631-4b40-8a54-268af3d95cb6-kube-api-access-w98tv\") pod \"openshift-config-operator-7777fb866f-kn5fp\" (UID: \"b0a0a1c1-7631-4b40-8a54-268af3d95cb6\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.151823 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.162690 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk7g5\" (UniqueName: \"kubernetes.io/projected/6525c86b-8810-4639-8d16-93d25fac15a9-kube-api-access-nk7g5\") pod \"route-controller-manager-6576b87f9c-t8tsq\" (UID: \"6525c86b-8810-4639-8d16-93d25fac15a9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.182758 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcf77\" (UniqueName: \"kubernetes.io/projected/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-kube-api-access-mcf77\") pod \"oauth-openshift-558db77b4-xfs5k\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.197820 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5bq2\" (UniqueName: \"kubernetes.io/projected/de62a4d5-de79-4ad5-983d-7071fb85dce8-kube-api-access-f5bq2\") pod \"marketplace-operator-79b997595-dzwvz\" (UID: \"de62a4d5-de79-4ad5-983d-7071fb85dce8\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.197878 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2w7s\" (UniqueName: \"kubernetes.io/projected/6f858ebc-0551-4b6c-86e5-ab124ca2b27f-kube-api-access-z2w7s\") pod \"dns-operator-744455d44c-4zqn6\" (UID: \"6f858ebc-0551-4b6c-86e5-ab124ca2b27f\") " pod="openshift-dns-operator/dns-operator-744455d44c-4zqn6" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.197924 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdngn\" (UniqueName: \"kubernetes.io/projected/0327ce11-e740-472b-8037-095af6cad376-kube-api-access-vdngn\") pod \"dns-default-xdcw7\" (UID: \"0327ce11-e740-472b-8037-095af6cad376\") " pod="openshift-dns/dns-default-xdcw7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.197946 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.197967 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m9rf\" (UniqueName: \"kubernetes.io/projected/576c7738-88c3-450e-b9c2-c291f73191b8-kube-api-access-5m9rf\") pod \"openshift-controller-manager-operator-756b6f6bc6-n9tp2\" (UID: \"576c7738-88c3-450e-b9c2-c291f73191b8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n9tp2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198001 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7c5b52bd-6cb5-4544-9c7d-b374210ae44d-plugins-dir\") pod \"csi-hostpathplugin-9bbb7\" (UID: \"7c5b52bd-6cb5-4544-9c7d-b374210ae44d\") " pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:55 crc 
kubenswrapper[4765]: I0121 13:04:55.198019 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3506449d-eca0-49d4-8a8f-dc8bc347b258-serving-cert\") pod \"service-ca-operator-777779d784-f5w2t\" (UID: \"3506449d-eca0-49d4-8a8f-dc8bc347b258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5w2t" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198034 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgn96\" (UniqueName: \"kubernetes.io/projected/3506449d-eca0-49d4-8a8f-dc8bc347b258-kube-api-access-wgn96\") pod \"service-ca-operator-777779d784-f5w2t\" (UID: \"3506449d-eca0-49d4-8a8f-dc8bc347b258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5w2t" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198049 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3506449d-eca0-49d4-8a8f-dc8bc347b258-config\") pod \"service-ca-operator-777779d784-f5w2t\" (UID: \"3506449d-eca0-49d4-8a8f-dc8bc347b258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5w2t" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198093 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-489qb\" (UniqueName: \"kubernetes.io/projected/08434441-0009-483c-84b1-86d78ac699f4-kube-api-access-489qb\") pod \"router-default-5444994796-mww4w\" (UID: \"08434441-0009-483c-84b1-86d78ac699f4\") " pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198117 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svgx7\" (UniqueName: \"kubernetes.io/projected/1bd0de6e-9060-46fd-9b0e-aac63b762b0d-kube-api-access-svgx7\") pod \"machine-config-controller-84d6567774-r5c5g\" (UID: \"1bd0de6e-9060-46fd-9b0e-aac63b762b0d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5c5g" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198196 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qt4r\" (UniqueName: \"kubernetes.io/projected/50ea39eb-559e-4298-9133-4d2a5c7890cb-kube-api-access-2qt4r\") pod \"control-plane-machine-set-operator-78cbb6b69f-x4zpp\" (UID: \"50ea39eb-559e-4298-9133-4d2a5c7890cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x4zpp" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198238 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gklcm\" (UniqueName: \"kubernetes.io/projected/7c5b52bd-6cb5-4544-9c7d-b374210ae44d-kube-api-access-gklcm\") pod \"csi-hostpathplugin-9bbb7\" (UID: \"7c5b52bd-6cb5-4544-9c7d-b374210ae44d\") " pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198253 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08434441-0009-483c-84b1-86d78ac699f4-metrics-certs\") pod \"router-default-5444994796-mww4w\" (UID: \"08434441-0009-483c-84b1-86d78ac699f4\") " pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198277 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" 
(UniqueName: \"kubernetes.io/host-path/7c5b52bd-6cb5-4544-9c7d-b374210ae44d-mountpoint-dir\") pod \"csi-hostpathplugin-9bbb7\" (UID: \"7c5b52bd-6cb5-4544-9c7d-b374210ae44d\") " pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198309 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/320c7cb7-c625-492f-9cab-d9f2858c5742-config\") pod \"kube-apiserver-operator-766d6c64bb-rzw49\" (UID: \"320c7cb7-c625-492f-9cab-d9f2858c5742\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rzw49" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198323 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dab7fc80-6af1-4650-9cc6-875e36327b3f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6t7p\" (UID: \"dab7fc80-6af1-4650-9cc6-875e36327b3f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6t7p" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198343 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/726a62c0-ba93-4fff-a141-09fefec9f93e-proxy-tls\") pod \"machine-config-operator-74547568cd-mtskd\" (UID: \"726a62c0-ba93-4fff-a141-09fefec9f93e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198360 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08434441-0009-483c-84b1-86d78ac699f4-service-ca-bundle\") pod \"router-default-5444994796-mww4w\" (UID: \"08434441-0009-483c-84b1-86d78ac699f4\") " pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198390 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6f858ebc-0551-4b6c-86e5-ab124ca2b27f-metrics-tls\") pod \"dns-operator-744455d44c-4zqn6\" (UID: \"6f858ebc-0551-4b6c-86e5-ab124ca2b27f\") " pod="openshift-dns-operator/dns-operator-744455d44c-4zqn6" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198412 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/eca25558-ed2d-42c5-bf06-b19d17fe60cf-signing-key\") pod \"service-ca-9c57cc56f-2dct6\" (UID: \"eca25558-ed2d-42c5-bf06-b19d17fe60cf\") " pod="openshift-service-ca/service-ca-9c57cc56f-2dct6" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198428 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdbdj\" (UniqueName: \"kubernetes.io/projected/17f0cd0d-b1e3-42d0-abde-21e830e40e5d-kube-api-access-zdbdj\") pod \"multus-admission-controller-857f4d67dd-79kcs\" (UID: \"17f0cd0d-b1e3-42d0-abde-21e830e40e5d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-79kcs" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198443 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/40064ef5-d679-4224-af54-a21488bbbb11-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-57jzj\" (UID: \"40064ef5-d679-4224-af54-a21488bbbb11\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-57jzj" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198475 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be901288-fb35-4b18-a7a6-92bebcc7ff38-cert\") pod \"ingress-canary-cvk5w\" (UID: \"be901288-fb35-4b18-a7a6-92bebcc7ff38\") " pod="openshift-ingress-canary/ingress-canary-cvk5w" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198492 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d6e131d4-811c-416f-bbc9-e83007e9a548-srv-cert\") pod \"olm-operator-6b444d44fb-rhvp6\" (UID: \"d6e131d4-811c-416f-bbc9-e83007e9a548\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198509 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7c5b52bd-6cb5-4544-9c7d-b374210ae44d-socket-dir\") pod \"csi-hostpathplugin-9bbb7\" (UID: \"7c5b52bd-6cb5-4544-9c7d-b374210ae44d\") " pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198544 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/08434441-0009-483c-84b1-86d78ac699f4-default-certificate\") pod \"router-default-5444994796-mww4w\" (UID: \"08434441-0009-483c-84b1-86d78ac699f4\") " pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198560 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dab7fc80-6af1-4650-9cc6-875e36327b3f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6t7p\" (UID: \"dab7fc80-6af1-4650-9cc6-875e36327b3f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6t7p" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198575 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/08434441-0009-483c-84b1-86d78ac699f4-stats-auth\") pod \"router-default-5444994796-mww4w\" (UID: \"08434441-0009-483c-84b1-86d78ac699f4\") " pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198615 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40064ef5-d679-4224-af54-a21488bbbb11-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-57jzj\" (UID: \"40064ef5-d679-4224-af54-a21488bbbb11\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-57jzj" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198632 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0327ce11-e740-472b-8037-095af6cad376-metrics-tls\") pod \"dns-default-xdcw7\" (UID: \"0327ce11-e740-472b-8037-095af6cad376\") " pod="openshift-dns/dns-default-xdcw7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198647 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/eb34fa0a-229a-4ef4-815b-93d888c19e84-profile-collector-cert\") pod \"catalog-operator-68c6474976-fp5vd\" (UID: \"eb34fa0a-229a-4ef4-815b-93d888c19e84\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198667 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/726a62c0-ba93-4fff-a141-09fefec9f93e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mtskd\" (UID: \"726a62c0-ba93-4fff-a141-09fefec9f93e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198701 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c-node-bootstrap-token\") pod \"machine-config-server-7ss5d\" (UID: \"36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c\") " pod="openshift-machine-config-operator/machine-config-server-7ss5d" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198716 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02ed0834-ca94-42c2-a597-4f8b72d265b5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xjvvk\" (UID: \"02ed0834-ca94-42c2-a597-4f8b72d265b5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xjvvk" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198734 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kctjn\" (UniqueName: \"kubernetes.io/projected/eca25558-ed2d-42c5-bf06-b19d17fe60cf-kube-api-access-kctjn\") pod \"service-ca-9c57cc56f-2dct6\" (UID: \"eca25558-ed2d-42c5-bf06-b19d17fe60cf\") " pod="openshift-service-ca/service-ca-9c57cc56f-2dct6" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198786 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0327ce11-e740-472b-8037-095af6cad376-config-volume\") pod \"dns-default-xdcw7\" (UID: \"0327ce11-e740-472b-8037-095af6cad376\") " pod="openshift-dns/dns-default-xdcw7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198824 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxfs8\" (UniqueName: \"kubernetes.io/projected/7abbef71-8ead-4d5e-afc9-45a1195804cd-kube-api-access-bxfs8\") pod \"packageserver-d55dfcdfc-9w7tp\" (UID: \"7abbef71-8ead-4d5e-afc9-45a1195804cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198856 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/320c7cb7-c625-492f-9cab-d9f2858c5742-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rzw49\" (UID: \"320c7cb7-c625-492f-9cab-d9f2858c5742\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rzw49" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198874 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-config-volume\") pod \"collect-profiles-29483340-pnjtd\" (UID: 
\"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198891 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzdrg\" (UniqueName: \"kubernetes.io/projected/02ed0834-ca94-42c2-a597-4f8b72d265b5-kube-api-access-hzdrg\") pod \"kube-storage-version-migrator-operator-b67b599dd-xjvvk\" (UID: \"02ed0834-ca94-42c2-a597-4f8b72d265b5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xjvvk" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198925 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/18432a06-6a3b-451d-87d6-42ca779acf9f-metrics-tls\") pod \"ingress-operator-5b745b69d9-hlkc2\" (UID: \"18432a06-6a3b-451d-87d6-42ca779acf9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198945 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-secret-volume\") pod \"collect-profiles-29483340-pnjtd\" (UID: \"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198960 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d6e131d4-811c-416f-bbc9-e83007e9a548-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rhvp6\" (UID: \"d6e131d4-811c-416f-bbc9-e83007e9a548\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.198977 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/320c7cb7-c625-492f-9cab-d9f2858c5742-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rzw49\" (UID: \"320c7cb7-c625-492f-9cab-d9f2858c5742\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rzw49" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.199008 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dab7fc80-6af1-4650-9cc6-875e36327b3f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6t7p\" (UID: \"dab7fc80-6af1-4650-9cc6-875e36327b3f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6t7p" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.199033 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbpkt\" (UniqueName: \"kubernetes.io/projected/fc2d5125-b816-4500-a5f1-99e7fd676f23-kube-api-access-mbpkt\") pod \"migrator-59844c95c7-srbvc\" (UID: \"fc2d5125-b816-4500-a5f1-99e7fd676f23\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-srbvc" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.199080 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec425a1a-0a7b-43ad-bbcd-ae31a5176e2b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qxjpg\" (UID: 
\"ec425a1a-0a7b-43ad-bbcd-ae31a5176e2b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qxjpg" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.199096 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/17f0cd0d-b1e3-42d0-abde-21e830e40e5d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-79kcs\" (UID: \"17f0cd0d-b1e3-42d0-abde-21e830e40e5d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-79kcs" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.199117 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fss4k\" (UniqueName: \"kubernetes.io/projected/726a62c0-ba93-4fff-a141-09fefec9f93e-kube-api-access-fss4k\") pod \"machine-config-operator-74547568cd-mtskd\" (UID: \"726a62c0-ba93-4fff-a141-09fefec9f93e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.199161 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5q9s\" (UniqueName: \"kubernetes.io/projected/eb34fa0a-229a-4ef4-815b-93d888c19e84-kube-api-access-j5q9s\") pod \"catalog-operator-68c6474976-fp5vd\" (UID: \"eb34fa0a-229a-4ef4-815b-93d888c19e84\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.199179 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7abbef71-8ead-4d5e-afc9-45a1195804cd-webhook-cert\") pod \"packageserver-d55dfcdfc-9w7tp\" (UID: \"7abbef71-8ead-4d5e-afc9-45a1195804cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.199194 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2mh2\" (UniqueName: \"kubernetes.io/projected/d6e131d4-811c-416f-bbc9-e83007e9a548-kube-api-access-f2mh2\") pod \"olm-operator-6b444d44fb-rhvp6\" (UID: \"d6e131d4-811c-416f-bbc9-e83007e9a548\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.199559 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7c5b52bd-6cb5-4544-9c7d-b374210ae44d-mountpoint-dir\") pod \"csi-hostpathplugin-9bbb7\" (UID: \"7c5b52bd-6cb5-4544-9c7d-b374210ae44d\") " pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.200157 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/320c7cb7-c625-492f-9cab-d9f2858c5742-config\") pod \"kube-apiserver-operator-766d6c64bb-rzw49\" (UID: \"320c7cb7-c625-492f-9cab-d9f2858c5742\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rzw49" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.200672 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7c5b52bd-6cb5-4544-9c7d-b374210ae44d-plugins-dir\") pod \"csi-hostpathplugin-9bbb7\" (UID: \"7c5b52bd-6cb5-4544-9c7d-b374210ae44d\") " pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.200713 4765 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7c5b52bd-6cb5-4544-9c7d-b374210ae44d-socket-dir\") pod \"csi-hostpathplugin-9bbb7\" (UID: \"7c5b52bd-6cb5-4544-9c7d-b374210ae44d\") " pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.201523 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/726a62c0-ba93-4fff-a141-09fefec9f93e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mtskd\" (UID: \"726a62c0-ba93-4fff-a141-09fefec9f93e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202171 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1bd0de6e-9060-46fd-9b0e-aac63b762b0d-proxy-tls\") pod \"machine-config-controller-84d6567774-r5c5g\" (UID: \"1bd0de6e-9060-46fd-9b0e-aac63b762b0d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5c5g" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202247 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7c5b52bd-6cb5-4544-9c7d-b374210ae44d-registration-dir\") pod \"csi-hostpathplugin-9bbb7\" (UID: \"7c5b52bd-6cb5-4544-9c7d-b374210ae44d\") " pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202273 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40064ef5-d679-4224-af54-a21488bbbb11-config\") pod \"kube-controller-manager-operator-78b949d7b-57jzj\" (UID: \"40064ef5-d679-4224-af54-a21488bbbb11\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-57jzj" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202319 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c-certs\") pod \"machine-config-server-7ss5d\" (UID: \"36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c\") " pod="openshift-machine-config-operator/machine-config-server-7ss5d" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202353 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/18432a06-6a3b-451d-87d6-42ca779acf9f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hlkc2\" (UID: \"18432a06-6a3b-451d-87d6-42ca779acf9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202369 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc4bm\" (UniqueName: \"kubernetes.io/projected/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-kube-api-access-fc4bm\") pod \"collect-profiles-29483340-pnjtd\" (UID: \"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202414 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg9p7\" (UniqueName: \"kubernetes.io/projected/36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c-kube-api-access-jg9p7\") pod \"machine-config-server-7ss5d\" (UID: 
\"36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c\") " pod="openshift-machine-config-operator/machine-config-server-7ss5d" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202434 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/50ea39eb-559e-4298-9133-4d2a5c7890cb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-x4zpp\" (UID: \"50ea39eb-559e-4298-9133-4d2a5c7890cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x4zpp" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202473 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1bd0de6e-9060-46fd-9b0e-aac63b762b0d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-r5c5g\" (UID: \"1bd0de6e-9060-46fd-9b0e-aac63b762b0d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5c5g" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202508 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7abbef71-8ead-4d5e-afc9-45a1195804cd-tmpfs\") pod \"packageserver-d55dfcdfc-9w7tp\" (UID: \"7abbef71-8ead-4d5e-afc9-45a1195804cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" Jan 21 13:04:55 crc kubenswrapper[4765]: E0121 13:04:55.202525 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:55.702506374 +0000 UTC m=+156.720232196 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202594 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/576c7738-88c3-450e-b9c2-c291f73191b8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-n9tp2\" (UID: \"576c7738-88c3-450e-b9c2-c291f73191b8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n9tp2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202626 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02ed0834-ca94-42c2-a597-4f8b72d265b5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xjvvk\" (UID: \"02ed0834-ca94-42c2-a597-4f8b72d265b5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xjvvk" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202663 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/576c7738-88c3-450e-b9c2-c291f73191b8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-n9tp2\" (UID: \"576c7738-88c3-450e-b9c2-c291f73191b8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n9tp2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202686 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/726a62c0-ba93-4fff-a141-09fefec9f93e-images\") pod \"machine-config-operator-74547568cd-mtskd\" (UID: \"726a62c0-ba93-4fff-a141-09fefec9f93e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202716 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-462gv\" (UniqueName: \"kubernetes.io/projected/ec425a1a-0a7b-43ad-bbcd-ae31a5176e2b-kube-api-access-462gv\") pod \"package-server-manager-789f6589d5-qxjpg\" (UID: \"ec425a1a-0a7b-43ad-bbcd-ae31a5176e2b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qxjpg" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202770 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/eb34fa0a-229a-4ef4-815b-93d888c19e84-srv-cert\") pod \"catalog-operator-68c6474976-fp5vd\" (UID: \"eb34fa0a-229a-4ef4-815b-93d888c19e84\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202808 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18432a06-6a3b-451d-87d6-42ca779acf9f-trusted-ca\") pod \"ingress-operator-5b745b69d9-hlkc2\" (UID: \"18432a06-6a3b-451d-87d6-42ca779acf9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2" Jan 21 13:04:55 crc 
kubenswrapper[4765]: I0121 13:04:55.202826 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de62a4d5-de79-4ad5-983d-7071fb85dce8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dzwvz\" (UID: \"de62a4d5-de79-4ad5-983d-7071fb85dce8\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202847 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/de62a4d5-de79-4ad5-983d-7071fb85dce8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dzwvz\" (UID: \"de62a4d5-de79-4ad5-983d-7071fb85dce8\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202870 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-475rj\" (UniqueName: \"kubernetes.io/projected/be901288-fb35-4b18-a7a6-92bebcc7ff38-kube-api-access-475rj\") pod \"ingress-canary-cvk5w\" (UID: \"be901288-fb35-4b18-a7a6-92bebcc7ff38\") " pod="openshift-ingress-canary/ingress-canary-cvk5w" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202888 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d6t9\" (UniqueName: \"kubernetes.io/projected/18432a06-6a3b-451d-87d6-42ca779acf9f-kube-api-access-8d6t9\") pod \"ingress-operator-5b745b69d9-hlkc2\" (UID: \"18432a06-6a3b-451d-87d6-42ca779acf9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202937 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/eca25558-ed2d-42c5-bf06-b19d17fe60cf-signing-cabundle\") pod \"service-ca-9c57cc56f-2dct6\" (UID: \"eca25558-ed2d-42c5-bf06-b19d17fe60cf\") " pod="openshift-service-ca/service-ca-9c57cc56f-2dct6" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202955 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7abbef71-8ead-4d5e-afc9-45a1195804cd-apiservice-cert\") pod \"packageserver-d55dfcdfc-9w7tp\" (UID: \"7abbef71-8ead-4d5e-afc9-45a1195804cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202980 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7c5b52bd-6cb5-4544-9c7d-b374210ae44d-csi-data-dir\") pod \"csi-hostpathplugin-9bbb7\" (UID: \"7c5b52bd-6cb5-4544-9c7d-b374210ae44d\") " pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.203186 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7c5b52bd-6cb5-4544-9c7d-b374210ae44d-csi-data-dir\") pod \"csi-hostpathplugin-9bbb7\" (UID: \"7c5b52bd-6cb5-4544-9c7d-b374210ae44d\") " pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.203768 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/576c7738-88c3-450e-b9c2-c291f73191b8-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-n9tp2\" (UID: \"576c7738-88c3-450e-b9c2-c291f73191b8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n9tp2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.202429 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7c5b52bd-6cb5-4544-9c7d-b374210ae44d-registration-dir\") pod \"csi-hostpathplugin-9bbb7\" (UID: \"7c5b52bd-6cb5-4544-9c7d-b374210ae44d\") " pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.204110 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6f858ebc-0551-4b6c-86e5-ab124ca2b27f-metrics-tls\") pod \"dns-operator-744455d44c-4zqn6\" (UID: \"6f858ebc-0551-4b6c-86e5-ab124ca2b27f\") " pod="openshift-dns-operator/dns-operator-744455d44c-4zqn6" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.204754 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1bd0de6e-9060-46fd-9b0e-aac63b762b0d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-r5c5g\" (UID: \"1bd0de6e-9060-46fd-9b0e-aac63b762b0d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5c5g" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.205072 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7abbef71-8ead-4d5e-afc9-45a1195804cd-tmpfs\") pod \"packageserver-d55dfcdfc-9w7tp\" (UID: \"7abbef71-8ead-4d5e-afc9-45a1195804cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.206598 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/320c7cb7-c625-492f-9cab-d9f2858c5742-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rzw49\" (UID: \"320c7cb7-c625-492f-9cab-d9f2858c5742\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rzw49" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.207669 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/576c7738-88c3-450e-b9c2-c291f73191b8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-n9tp2\" (UID: \"576c7738-88c3-450e-b9c2-c291f73191b8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n9tp2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.208051 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96kkw\" (UniqueName: \"kubernetes.io/projected/58192d0b-35de-4d58-8037-559360392628-kube-api-access-96kkw\") pod \"openshift-apiserver-operator-796bbdcf4f-w6s52\" (UID: \"58192d0b-35de-4d58-8037-559360392628\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w6s52" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.222604 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47mlv\" (UniqueName: \"kubernetes.io/projected/d572b4ba-2f55-43ef-8b71-af94f9519768-kube-api-access-47mlv\") pod \"apiserver-7bbb656c7d-gcs5c\" (UID: \"d572b4ba-2f55-43ef-8b71-af94f9519768\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" 
Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.245602 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9xk6\" (UniqueName: \"kubernetes.io/projected/238886b4-14ad-4a1c-8ba4-84b652601186-kube-api-access-v9xk6\") pod \"authentication-operator-69f744f599-25rmn\" (UID: \"238886b4-14ad-4a1c-8ba4-84b652601186\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.250871 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5zz49" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.267234 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg4zb\" (UniqueName: \"kubernetes.io/projected/48998ce4-56d3-439e-90c5-c7caa4b8344f-kube-api-access-gg4zb\") pod \"cluster-samples-operator-665b6dd947-fq5sj\" (UID: \"48998ce4-56d3-439e-90c5-c7caa4b8344f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fq5sj" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.269435 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fq5sj" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.290161 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgf8j\" (UniqueName: \"kubernetes.io/projected/67f85054-6343-454e-9f9f-eebadd266b08-kube-api-access-dgf8j\") pod \"machine-approver-56656f9798-92hfr\" (UID: \"67f85054-6343-454e-9f9f-eebadd266b08\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92hfr" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.304355 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:55 crc kubenswrapper[4765]: E0121 13:04:55.304612 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:55.804584168 +0000 UTC m=+156.822309990 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.304981 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:55 crc kubenswrapper[4765]: E0121 13:04:55.305429 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:55.805412221 +0000 UTC m=+156.823138113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.308656 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw8cs\" (UniqueName: \"kubernetes.io/projected/c35257f3-6d8a-4917-a956-3b71a0e54c23-kube-api-access-bw8cs\") pod \"machine-api-operator-5694c8668f-mnwzz\" (UID: \"c35257f3-6d8a-4917-a956-3b71a0e54c23\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mnwzz" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.308843 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.329181 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.349133 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.349348 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp"] Jan 21 13:04:55 crc kubenswrapper[4765]: W0121 13:04:55.357223 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0a0a1c1_7631_4b40_8a54_268af3d95cb6.slice/crio-f4228cb48e9825bb13a5b11fc5d1a54c24c67d09d0f1a7be82964d13bb543064 WatchSource:0}: Error finding container f4228cb48e9825bb13a5b11fc5d1a54c24c67d09d0f1a7be82964d13bb543064: Status 404 returned error can't find the container with id f4228cb48e9825bb13a5b11fc5d1a54c24c67d09d0f1a7be82964d13bb543064 Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.368834 4765 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-certs-default" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.382851 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/08434441-0009-483c-84b1-86d78ac699f4-default-certificate\") pod \"router-default-5444994796-mww4w\" (UID: \"08434441-0009-483c-84b1-86d78ac699f4\") " pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.388078 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.394889 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/08434441-0009-483c-84b1-86d78ac699f4-stats-auth\") pod \"router-default-5444994796-mww4w\" (UID: \"08434441-0009-483c-84b1-86d78ac699f4\") " pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.399630 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mnwzz" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.406084 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.407065 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:55 crc kubenswrapper[4765]: E0121 13:04:55.407239 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:55.907218697 +0000 UTC m=+156.924944519 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.407743 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.408054 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 13:04:55 crc kubenswrapper[4765]: E0121 13:04:55.408270 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:55.908256446 +0000 UTC m=+156.925982268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.418003 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08434441-0009-483c-84b1-86d78ac699f4-metrics-certs\") pod \"router-default-5444994796-mww4w\" (UID: \"08434441-0009-483c-84b1-86d78ac699f4\") " pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.421354 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w6s52" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.429264 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.431607 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08434441-0009-483c-84b1-86d78ac699f4-service-ca-bundle\") pod \"router-default-5444994796-mww4w\" (UID: \"08434441-0009-483c-84b1-86d78ac699f4\") " pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.443168 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.448647 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.462077 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fq5sj"] Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.470355 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.474363 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.478904 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40064ef5-d679-4224-af54-a21488bbbb11-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-57jzj\" (UID: \"40064ef5-d679-4224-af54-a21488bbbb11\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-57jzj" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.479398 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92hfr" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.490446 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.509465 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.509887 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:55 crc kubenswrapper[4765]: E0121 13:04:55.510026 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.00999871 +0000 UTC m=+157.027724542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.511340 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:55 crc kubenswrapper[4765]: E0121 13:04:55.511768 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.011754149 +0000 UTC m=+157.029480131 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.515298 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40064ef5-d679-4224-af54-a21488bbbb11-config\") pod \"kube-controller-manager-operator-78b949d7b-57jzj\" (UID: \"40064ef5-d679-4224-af54-a21488bbbb11\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-57jzj" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.521237 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.522737 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5zz49"] Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.527831 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.560722 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.567458 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.574916 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dab7fc80-6af1-4650-9cc6-875e36327b3f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6t7p\" (UID: \"dab7fc80-6af1-4650-9cc6-875e36327b3f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6t7p" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.588221 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.591078 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dab7fc80-6af1-4650-9cc6-875e36327b3f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6t7p\" (UID: \"dab7fc80-6af1-4650-9cc6-875e36327b3f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6t7p" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.608960 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.612838 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:55 crc kubenswrapper[4765]: E0121 13:04:55.613818 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.113784491 +0000 UTC m=+157.131510493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.628156 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.647785 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.654622 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/18432a06-6a3b-451d-87d6-42ca779acf9f-metrics-tls\") pod \"ingress-operator-5b745b69d9-hlkc2\" (UID: \"18432a06-6a3b-451d-87d6-42ca779acf9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.668529 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 13:04:55 crc kubenswrapper[4765]: E0121 13:04:55.671887 4765 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 21 13:04:55 crc kubenswrapper[4765]: E0121 13:04:55.672077 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-client-ca podName:fc58cdb9-8e5c-426c-a193-994e3b2ce117 nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.172052177 +0000 UTC m=+157.189777999 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-client-ca") pod "controller-manager-879f6c89f-dhjpc" (UID: "fc58cdb9-8e5c-426c-a193-994e3b2ce117") : failed to sync configmap cache: timed out waiting for the condition Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.696255 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.708051 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18432a06-6a3b-451d-87d6-42ca779acf9f-trusted-ca\") pod \"ingress-operator-5b745b69d9-hlkc2\" (UID: \"18432a06-6a3b-451d-87d6-42ca779acf9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.708584 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.715493 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:55 crc kubenswrapper[4765]: E0121 13:04:55.716123 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.216096041 +0000 UTC m=+157.233821873 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.716710 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1bd0de6e-9060-46fd-9b0e-aac63b762b0d-proxy-tls\") pod \"machine-config-controller-84d6567774-r5c5g\" (UID: \"1bd0de6e-9060-46fd-9b0e-aac63b762b0d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5c5g" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.728346 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.748097 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.768153 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.787728 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.795096 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d6e131d4-811c-416f-bbc9-e83007e9a548-srv-cert\") pod \"olm-operator-6b444d44fb-rhvp6\" (UID: \"d6e131d4-811c-416f-bbc9-e83007e9a548\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.806493 4765 request.go:700] Waited for 1.013364484s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0 Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.809051 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.814328 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d6e131d4-811c-416f-bbc9-e83007e9a548-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rhvp6\" (UID: \"d6e131d4-811c-416f-bbc9-e83007e9a548\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.814555 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-secret-volume\") pod \"collect-profiles-29483340-pnjtd\" (UID: \"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.816157 4765 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/eb34fa0a-229a-4ef4-815b-93d888c19e84-profile-collector-cert\") pod \"catalog-operator-68c6474976-fp5vd\" (UID: \"eb34fa0a-229a-4ef4-815b-93d888c19e84\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.816748 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:55 crc kubenswrapper[4765]: E0121 13:04:55.816907 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.3168839 +0000 UTC m=+157.334609742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.817192 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:55 crc kubenswrapper[4765]: E0121 13:04:55.817557 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.317544968 +0000 UTC m=+157.335270900 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.829628 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.849673 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.871044 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.888762 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.907959 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.914925 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02ed0834-ca94-42c2-a597-4f8b72d265b5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xjvvk\" (UID: \"02ed0834-ca94-42c2-a597-4f8b72d265b5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xjvvk" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.919040 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:55 crc kubenswrapper[4765]: E0121 13:04:55.919196 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.419168439 +0000 UTC m=+157.436894261 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.919302 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:55 crc kubenswrapper[4765]: E0121 13:04:55.919621 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.419613401 +0000 UTC m=+157.437339223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.929912 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.949195 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.968796 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.973562 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02ed0834-ca94-42c2-a597-4f8b72d265b5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xjvvk\" (UID: \"02ed0834-ca94-42c2-a597-4f8b72d265b5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xjvvk" Jan 21 13:04:55 crc kubenswrapper[4765]: I0121 13:04:55.989595 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.008299 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.016121 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec425a1a-0a7b-43ad-bbcd-ae31a5176e2b-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-qxjpg\" (UID: \"ec425a1a-0a7b-43ad-bbcd-ae31a5176e2b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qxjpg" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.021145 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.021360 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.521344476 +0000 UTC m=+157.539070298 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.021626 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.022034 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.522020654 +0000 UTC m=+157.539746476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.028437 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.034895 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/726a62c0-ba93-4fff-a141-09fefec9f93e-images\") pod \"machine-config-operator-74547568cd-mtskd\" (UID: \"726a62c0-ba93-4fff-a141-09fefec9f93e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.041433 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp" event={"ID":"b0a0a1c1-7631-4b40-8a54-268af3d95cb6","Type":"ContainerStarted","Data":"f4228cb48e9825bb13a5b11fc5d1a54c24c67d09d0f1a7be82964d13bb543064"} Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.042033 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hg5vm" event={"ID":"7cb01aaa-41aa-442c-b18c-d345abd3d3d9","Type":"ContainerStarted","Data":"9aeff027c1c847168938725ac21f00f8dbafbdcf6942e594fb6b07042e615199"} Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.047983 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.067862 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.076960 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/726a62c0-ba93-4fff-a141-09fefec9f93e-proxy-tls\") pod \"machine-config-operator-74547568cd-mtskd\" (UID: \"726a62c0-ba93-4fff-a141-09fefec9f93e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.087673 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.096931 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/17f0cd0d-b1e3-42d0-abde-21e830e40e5d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-79kcs\" (UID: \"17f0cd0d-b1e3-42d0-abde-21e830e40e5d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-79kcs" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.108679 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.117702 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/eb34fa0a-229a-4ef4-815b-93d888c19e84-srv-cert\") pod \"catalog-operator-68c6474976-fp5vd\" (UID: \"eb34fa0a-229a-4ef4-815b-93d888c19e84\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.122872 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.123076 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.623044368 +0000 UTC m=+157.640770190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.123616 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.123928 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.623916542 +0000 UTC m=+157.641642364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.127903 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.149744 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.169350 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.189286 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.199276 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7abbef71-8ead-4d5e-afc9-45a1195804cd-webhook-cert\") pod \"packageserver-d55dfcdfc-9w7tp\" (UID: \"7abbef71-8ead-4d5e-afc9-45a1195804cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.200125 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7abbef71-8ead-4d5e-afc9-45a1195804cd-apiservice-cert\") pod \"packageserver-d55dfcdfc-9w7tp\" (UID: \"7abbef71-8ead-4d5e-afc9-45a1195804cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.200708 4765 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.200772 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be901288-fb35-4b18-a7a6-92bebcc7ff38-cert podName:be901288-fb35-4b18-a7a6-92bebcc7ff38 nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.70075141 +0000 UTC m=+157.718477232 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/be901288-fb35-4b18-a7a6-92bebcc7ff38-cert") pod "ingress-canary-cvk5w" (UID: "be901288-fb35-4b18-a7a6-92bebcc7ff38") : failed to sync secret cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.200717 4765 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.200983 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eca25558-ed2d-42c5-bf06-b19d17fe60cf-signing-key podName:eca25558-ed2d-42c5-bf06-b19d17fe60cf nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.700972046 +0000 UTC m=+157.718697868 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/eca25558-ed2d-42c5-bf06-b19d17fe60cf-signing-key") pod "service-ca-9c57cc56f-2dct6" (UID: "eca25558-ed2d-42c5-bf06-b19d17fe60cf") : failed to sync secret cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.201051 4765 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.200873 4765 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.201246 4765 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.201003 4765 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.201193 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0327ce11-e740-472b-8037-095af6cad376-metrics-tls podName:0327ce11-e740-472b-8037-095af6cad376 nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.701159591 +0000 UTC m=+157.718885453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/0327ce11-e740-472b-8037-095af6cad376-metrics-tls") pod "dns-default-xdcw7" (UID: "0327ce11-e740-472b-8037-095af6cad376") : failed to sync secret cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.201295 4765 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.201303 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3506449d-eca0-49d4-8a8f-dc8bc347b258-serving-cert podName:3506449d-eca0-49d4-8a8f-dc8bc347b258 nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.701287775 +0000 UTC m=+157.719013677 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3506449d-eca0-49d4-8a8f-dc8bc347b258-serving-cert") pod "service-ca-operator-777779d784-f5w2t" (UID: "3506449d-eca0-49d4-8a8f-dc8bc347b258") : failed to sync secret cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.201383 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-config-volume podName:561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.701363607 +0000 UTC m=+157.719089459 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-config-volume") pod "collect-profiles-29483340-pnjtd" (UID: "561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b") : failed to sync configmap cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.201428 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3506449d-eca0-49d4-8a8f-dc8bc347b258-config podName:3506449d-eca0-49d4-8a8f-dc8bc347b258 nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.701414848 +0000 UTC m=+157.719140710 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/3506449d-eca0-49d4-8a8f-dc8bc347b258-config") pod "service-ca-operator-777779d784-f5w2t" (UID: "3506449d-eca0-49d4-8a8f-dc8bc347b258") : failed to sync configmap cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.201453 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0327ce11-e740-472b-8037-095af6cad376-config-volume podName:0327ce11-e740-472b-8037-095af6cad376 nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.701442349 +0000 UTC m=+157.719168271 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0327ce11-e740-472b-8037-095af6cad376-config-volume") pod "dns-default-xdcw7" (UID: "0327ce11-e740-472b-8037-095af6cad376") : failed to sync configmap cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.201871 4765 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.201994 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c-node-bootstrap-token podName:36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.701969104 +0000 UTC m=+157.719694966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c-node-bootstrap-token") pod "machine-config-server-7ss5d" (UID: "36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c") : failed to sync secret cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.204559 4765 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.204647 4765 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.204748 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de62a4d5-de79-4ad5-983d-7071fb85dce8-marketplace-operator-metrics podName:de62a4d5-de79-4ad5-983d-7071fb85dce8 nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.70473295 +0000 UTC m=+157.722458802 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/de62a4d5-de79-4ad5-983d-7071fb85dce8-marketplace-operator-metrics") pod "marketplace-operator-79b997595-dzwvz" (UID: "de62a4d5-de79-4ad5-983d-7071fb85dce8") : failed to sync secret cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.204755 4765 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.204805 4765 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.204868 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c-certs podName:36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.704853563 +0000 UTC m=+157.722579435 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c-certs") pod "machine-config-server-7ss5d" (UID: "36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c") : failed to sync secret cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.205081 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de62a4d5-de79-4ad5-983d-7071fb85dce8-marketplace-trusted-ca podName:de62a4d5-de79-4ad5-983d-7071fb85dce8 nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.705070079 +0000 UTC m=+157.722795901 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/de62a4d5-de79-4ad5-983d-7071fb85dce8-marketplace-trusted-ca") pod "marketplace-operator-79b997595-dzwvz" (UID: "de62a4d5-de79-4ad5-983d-7071fb85dce8") : failed to sync configmap cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.205150 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eca25558-ed2d-42c5-bf06-b19d17fe60cf-signing-cabundle podName:eca25558-ed2d-42c5-bf06-b19d17fe60cf nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.705141751 +0000 UTC m=+157.722867573 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/eca25558-ed2d-42c5-bf06-b19d17fe60cf-signing-cabundle") pod "service-ca-9c57cc56f-2dct6" (UID: "eca25558-ed2d-42c5-bf06-b19d17fe60cf") : failed to sync configmap cache: timed out waiting for the condition Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.208785 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.224440 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.224573 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.724543916 +0000 UTC m=+157.742269768 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.225960 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-client-ca\") pod \"controller-manager-879f6c89f-dhjpc\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.226525 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.227048 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.727031035 +0000 UTC m=+157.744756857 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.228092 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.248137 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.268998 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.295297 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.307920 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.327831 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.327984 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.827955867 +0000 UTC m=+157.845681689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.328501 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.328555 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.329027 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 13:04:56.829014116 +0000 UTC m=+157.846739948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.350129 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.368529 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.388439 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.409092 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.429629 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.429949 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.430291 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:56.930257187 +0000 UTC m=+157.947983049 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.430743 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.431259 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 13:04:56.931241744 +0000 UTC m=+157.948967556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.448844 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.473249 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.488363 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.510088 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.528153 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.533126 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.533610 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:57.033578155 +0000 UTC m=+158.051303977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.533865 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.534335 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:57.034322195 +0000 UTC m=+158.052048017 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.549565 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.569069 4765 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.588617 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.608038 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.628532 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.633448 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/50ea39eb-559e-4298-9133-4d2a5c7890cb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-x4zpp\" (UID: \"50ea39eb-559e-4298-9133-4d2a5c7890cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x4zpp" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.635000 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.635185 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:57.135156984 +0000 UTC m=+158.152882816 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.635553 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.636455 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:57.136417559 +0000 UTC m=+158.154143421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.648847 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.669887 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.689779 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.711061 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.730394 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 13:04:56 crc kubenswrapper[4765]: W0121 13:04:56.735385 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67f85054_6343_454e_9f9f_eebadd266b08.slice/crio-4b32a5b267f9cebd43eeab1b9ae1a95a61bbd7157ce5816ca333401c04a9f8be WatchSource:0}: Error finding container 4b32a5b267f9cebd43eeab1b9ae1a95a61bbd7157ce5816ca333401c04a9f8be: Status 404 returned error can't find the container with id 4b32a5b267f9cebd43eeab1b9ae1a95a61bbd7157ce5816ca333401c04a9f8be Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.737346 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:56 crc 
kubenswrapper[4765]: E0121 13:04:56.737657 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:57.237624389 +0000 UTC m=+158.255350351 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.737745 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c-certs\") pod \"machine-config-server-7ss5d\" (UID: \"36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c\") " pod="openshift-machine-config-operator/machine-config-server-7ss5d" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.737841 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de62a4d5-de79-4ad5-983d-7071fb85dce8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dzwvz\" (UID: \"de62a4d5-de79-4ad5-983d-7071fb85dce8\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.737869 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/de62a4d5-de79-4ad5-983d-7071fb85dce8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dzwvz\" (UID: \"de62a4d5-de79-4ad5-983d-7071fb85dce8\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.737921 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/eca25558-ed2d-42c5-bf06-b19d17fe60cf-signing-cabundle\") pod \"service-ca-9c57cc56f-2dct6\" (UID: \"eca25558-ed2d-42c5-bf06-b19d17fe60cf\") " pod="openshift-service-ca/service-ca-9c57cc56f-2dct6" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.737978 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.738033 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3506449d-eca0-49d4-8a8f-dc8bc347b258-serving-cert\") pod \"service-ca-operator-777779d784-f5w2t\" (UID: \"3506449d-eca0-49d4-8a8f-dc8bc347b258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5w2t" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.738065 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3506449d-eca0-49d4-8a8f-dc8bc347b258-config\") pod \"service-ca-operator-777779d784-f5w2t\" (UID: \"3506449d-eca0-49d4-8a8f-dc8bc347b258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5w2t" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.738114 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/eca25558-ed2d-42c5-bf06-b19d17fe60cf-signing-key\") pod \"service-ca-9c57cc56f-2dct6\" (UID: \"eca25558-ed2d-42c5-bf06-b19d17fe60cf\") " pod="openshift-service-ca/service-ca-9c57cc56f-2dct6" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.738147 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be901288-fb35-4b18-a7a6-92bebcc7ff38-cert\") pod \"ingress-canary-cvk5w\" (UID: \"be901288-fb35-4b18-a7a6-92bebcc7ff38\") " pod="openshift-ingress-canary/ingress-canary-cvk5w" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.738184 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0327ce11-e740-472b-8037-095af6cad376-metrics-tls\") pod \"dns-default-xdcw7\" (UID: \"0327ce11-e740-472b-8037-095af6cad376\") " pod="openshift-dns/dns-default-xdcw7" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.738242 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c-node-bootstrap-token\") pod \"machine-config-server-7ss5d\" (UID: \"36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c\") " pod="openshift-machine-config-operator/machine-config-server-7ss5d" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.738311 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0327ce11-e740-472b-8037-095af6cad376-config-volume\") pod \"dns-default-xdcw7\" (UID: \"0327ce11-e740-472b-8037-095af6cad376\") " pod="openshift-dns/dns-default-xdcw7" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.738396 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-config-volume\") pod \"collect-profiles-29483340-pnjtd\" (UID: \"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.740092 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/eca25558-ed2d-42c5-bf06-b19d17fe60cf-signing-cabundle\") pod \"service-ca-9c57cc56f-2dct6\" (UID: \"eca25558-ed2d-42c5-bf06-b19d17fe60cf\") " pod="openshift-service-ca/service-ca-9c57cc56f-2dct6" Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.740624 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:57.240606871 +0000 UTC m=+158.258332883 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.740942 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0327ce11-e740-472b-8037-095af6cad376-config-volume\") pod \"dns-default-xdcw7\" (UID: \"0327ce11-e740-472b-8037-095af6cad376\") " pod="openshift-dns/dns-default-xdcw7" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.741060 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3506449d-eca0-49d4-8a8f-dc8bc347b258-config\") pod \"service-ca-operator-777779d784-f5w2t\" (UID: \"3506449d-eca0-49d4-8a8f-dc8bc347b258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5w2t" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.742100 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de62a4d5-de79-4ad5-983d-7071fb85dce8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dzwvz\" (UID: \"de62a4d5-de79-4ad5-983d-7071fb85dce8\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.743332 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-config-volume\") pod \"collect-profiles-29483340-pnjtd\" (UID: \"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.745844 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/be901288-fb35-4b18-a7a6-92bebcc7ff38-cert\") pod \"ingress-canary-cvk5w\" (UID: \"be901288-fb35-4b18-a7a6-92bebcc7ff38\") " pod="openshift-ingress-canary/ingress-canary-cvk5w" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.746059 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0327ce11-e740-472b-8037-095af6cad376-metrics-tls\") pod \"dns-default-xdcw7\" (UID: \"0327ce11-e740-472b-8037-095af6cad376\") " pod="openshift-dns/dns-default-xdcw7" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.746991 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3506449d-eca0-49d4-8a8f-dc8bc347b258-serving-cert\") pod \"service-ca-operator-777779d784-f5w2t\" (UID: \"3506449d-eca0-49d4-8a8f-dc8bc347b258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5w2t" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.752106 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/eca25558-ed2d-42c5-bf06-b19d17fe60cf-signing-key\") pod \"service-ca-9c57cc56f-2dct6\" (UID: \"eca25558-ed2d-42c5-bf06-b19d17fe60cf\") " pod="openshift-service-ca/service-ca-9c57cc56f-2dct6" 
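The repeated "failed to sync secret cache / failed to sync configmap cache: timed out waiting for the condition" errors above, and the "Caches populated for *v1.Secret from object-..." lines that follow, are two halves of the same startup race: after the kubelet restarts, every secret- or configmap-backed volume mount waits for the kubelet's watch cache for that object to finish its initial sync, and gives up with the wait package's generic "timed out waiting for the condition" message if the sync has not completed in time. Once the caches populate, the same mounts go through (the MountVolume.SetUp succeeded lines above). A minimal client-go sketch of that wait, using a namespace-scoped informer as a stand-in for the kubelet's per-object reflectors; the kubeconfig path and namespace here are illustrative, not taken from this node:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; the kubelet authenticates with its own credentials.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A namespace-scoped Secret informer, standing in for the kubelet's
	// per-object caches ("Caches populated for *v1.Secret from object-...").
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("openshift-service-ca"))
	secrets := factory.Core().V1().Secrets().Informer()

	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Second)
	defer cancel()
	factory.Start(ctx.Done())

	// If the initial LIST+WATCH has not completed before the deadline,
	// WaitForCacheSync returns false and the caller surfaces the wait
	// package's error text: "timed out waiting for the condition".
	if !cache.WaitForCacheSync(ctx.Done(), secrets.HasSynced) {
		fmt.Println("failed to sync secret cache: timed out waiting for the condition")
		return
	}
	fmt.Println("secret cache synced; MountVolume.SetUp can now read the secret")
}
```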
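The other failure threaded through this window is different in kind: every UnmountVolume.TearDown and MountVolume.MountDevice attempt for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 fails with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers". The kubelet only learns about a CSI driver when that driver's node plugin registers over the kubelet's plugin-registration socket, and the pod providing this driver (hostpath-provisioner/csi-hostpathplugin-9bbb7) is itself only being started later in this log, so these operations cannot succeed yet and keep being re-queued. A sketch of how registration could be confirmed from outside, assuming the conventional kubelet registry path and this node's name "crc":

```go
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The CSINode object mirrors what this node's kubelet has accepted
	// through plugin registration; "crc" is the node name in this log.
	csiNode, err := client.StorageV1().CSINodes().Get(
		context.Background(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered on node:", d.Name) // expect kubevirt.io.hostpath-provisioner once up
	}

	// On the node itself, registration sockets appear under the kubelet's
	// plugins_registry directory (the default path; an assumption here).
	if entries, err := os.ReadDir("/var/lib/kubelet/plugins_registry"); err == nil {
		for _, e := range entries {
			fmt.Println("registration socket:", e.Name())
		}
	}
}
```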
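Each of these failures is re-queued by nestedpendingoperations with an explicit deadline: "No retries permitted until ... (durationBeforeRetry 500ms)". The delay is tracked per operation and grows exponentially on consecutive failures; further down, the client-ca configmap mount that keeps failing reaches "durationBeforeRetry 1s". A minimal sketch of the same doubling policy with k8s.io/apimachinery's wait package; the 500ms base and 2x factor match the log, while the step cap is an assumption:

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 0
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // matches "durationBeforeRetry 500ms"
		Factor:   2.0,                    // next failure waits 1s, then 2s, ...
		Steps:    5,                      // illustrative cap, not from the log
	}

	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempts++
		fmt.Printf("attempt %d\n", attempts)
		// Stand-in for MountVolume.SetUp: succeeds once whatever the
		// operation depends on (cache sync, CSI registration) is available.
		if attempts < 3 {
			return false, nil // not done yet; retry after the backoff delay
		}
		return true, nil
	})
	if errors.Is(err, wait.ErrWaitTimeout) {
		// The same error string the kubelet wraps in the messages above.
		fmt.Println("gave up: timed out waiting for the condition")
	}
}
```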
Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.753035 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.753080 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/de62a4d5-de79-4ad5-983d-7071fb85dce8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dzwvz\" (UID: \"de62a4d5-de79-4ad5-983d-7071fb85dce8\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.770063 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c-node-bootstrap-token\") pod \"machine-config-server-7ss5d\" (UID: \"36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c\") " pod="openshift-machine-config-operator/machine-config-server-7ss5d" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.773591 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.786552 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c-certs\") pod \"machine-config-server-7ss5d\" (UID: \"36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c\") " pod="openshift-machine-config-operator/machine-config-server-7ss5d" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.788885 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.807847 4765 request.go:700] Waited for 1.816350107s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6576b87f9c-t8tsq Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.843516 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.843856 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:57.343834687 +0000 UTC m=+158.361560519 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.848832 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/95b5195e-ccd9-451a-baf8-ee70aaa0e650-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d44mx\" (UID: \"95b5195e-ccd9-451a-baf8-ee70aaa0e650\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.907583 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9ftz\" (UniqueName: \"kubernetes.io/projected/193f8517-94f7-42fe-9fe2-0bdb69cc8424-kube-api-access-v9ftz\") pod \"downloads-7954f5f757-5cllr\" (UID: \"193f8517-94f7-42fe-9fe2-0bdb69cc8424\") " pod="openshift-console/downloads-7954f5f757-5cllr" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.913277 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d4723c5-1628-4481-83b8-498fd4e5362e-bound-sa-token\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.917788 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjgsk\" (UniqueName: \"kubernetes.io/projected/9e2101c8-3b98-4b58-959a-2cda2a8d08cb-kube-api-access-vjgsk\") pod \"etcd-operator-b45778765-nkzc2\" (UID: \"9e2101c8-3b98-4b58-959a-2cda2a8d08cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.928111 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvmgm\" (UniqueName: \"kubernetes.io/projected/0be5f3b8-eeae-405b-a836-e806531a57e0-kube-api-access-fvmgm\") pod \"console-f9d7485db-l7658\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.946667 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:56 crc kubenswrapper[4765]: E0121 13:04:56.947883 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:57.447868244 +0000 UTC m=+158.465594066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.954924 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4trd\" (UniqueName: \"kubernetes.io/projected/5d4723c5-1628-4481-83b8-498fd4e5362e-kube-api-access-x4trd\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:56 crc kubenswrapper[4765]: I0121 13:04:56.982077 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wcb2\" (UniqueName: \"kubernetes.io/projected/95b5195e-ccd9-451a-baf8-ee70aaa0e650-kube-api-access-5wcb2\") pod \"cluster-image-registry-operator-dc59b4c8b-d44mx\" (UID: \"95b5195e-ccd9-451a-baf8-ee70aaa0e650\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.011560 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-489qb\" (UniqueName: \"kubernetes.io/projected/08434441-0009-483c-84b1-86d78ac699f4-kube-api-access-489qb\") pod \"router-default-5444994796-mww4w\" (UID: \"08434441-0009-483c-84b1-86d78ac699f4\") " pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.032602 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5bq2\" (UniqueName: \"kubernetes.io/projected/de62a4d5-de79-4ad5-983d-7071fb85dce8-kube-api-access-f5bq2\") pod \"marketplace-operator-79b997595-dzwvz\" (UID: \"de62a4d5-de79-4ad5-983d-7071fb85dce8\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.044735 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svgx7\" (UniqueName: \"kubernetes.io/projected/1bd0de6e-9060-46fd-9b0e-aac63b762b0d-kube-api-access-svgx7\") pod \"machine-config-controller-84d6567774-r5c5g\" (UID: \"1bd0de6e-9060-46fd-9b0e-aac63b762b0d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5c5g" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.048748 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:57 crc kubenswrapper[4765]: E0121 13:04:57.049946 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:57.549914027 +0000 UTC m=+158.567639849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.051985 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-25rmn"] Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.052085 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:57 crc kubenswrapper[4765]: E0121 13:04:57.052696 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:57.552675973 +0000 UTC m=+158.570401795 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.064039 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2w7s\" (UniqueName: \"kubernetes.io/projected/6f858ebc-0551-4b6c-86e5-ab124ca2b27f-kube-api-access-z2w7s\") pod \"dns-operator-744455d44c-4zqn6\" (UID: \"6f858ebc-0551-4b6c-86e5-ab124ca2b27f\") " pod="openshift-dns-operator/dns-operator-744455d44c-4zqn6" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.069196 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp" event={"ID":"b0a0a1c1-7631-4b40-8a54-268af3d95cb6","Type":"ContainerStarted","Data":"5b32ff768fd589af881c37fc29ebc85c81cd0621253068b78e85ff057851db8d"} Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.075539 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mnwzz"] Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.080687 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fq5sj" event={"ID":"48998ce4-56d3-439e-90c5-c7caa4b8344f","Type":"ContainerStarted","Data":"5f8b651db49dc3df3a8e924b9e869b747893af0217cd63385635c337a4653db7"} Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.082109 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qt4r\" (UniqueName: \"kubernetes.io/projected/50ea39eb-559e-4298-9133-4d2a5c7890cb-kube-api-access-2qt4r\") pod \"control-plane-machine-set-operator-78cbb6b69f-x4zpp\" (UID: 
\"50ea39eb-559e-4298-9133-4d2a5c7890cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x4zpp" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.082680 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92hfr" event={"ID":"67f85054-6343-454e-9f9f-eebadd266b08","Type":"ContainerStarted","Data":"4b32a5b267f9cebd43eeab1b9ae1a95a61bbd7157ce5816ca333401c04a9f8be"} Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.090841 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5zz49" event={"ID":"f1c56430-35ae-4e7c-9f5a-108205dbe2b3","Type":"ContainerStarted","Data":"1265034aa7194488e10b1112a2d6461d1b6c20c35d822cee4fc64c2cb6c18e9a"} Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.091582 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-5zz49" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.094314 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.100448 4765 patch_prober.go:28] interesting pod/console-operator-58897d9998-5zz49 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.100498 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-5zz49" podUID="f1c56430-35ae-4e7c-9f5a-108205dbe2b3" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.106017 4765 generic.go:334] "Generic (PLEG): container finished" podID="7cb01aaa-41aa-442c-b18c-d345abd3d3d9" containerID="ea74b81aa9c3a8a1e14cf57e768540ad7ae24a631cafd107b3a4a5f626f08e95" exitCode=0 Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.106103 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hg5vm" event={"ID":"7cb01aaa-41aa-442c-b18c-d345abd3d3d9","Type":"ContainerDied","Data":"ea74b81aa9c3a8a1e14cf57e768540ad7ae24a631cafd107b3a4a5f626f08e95"} Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.107505 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdngn\" (UniqueName: \"kubernetes.io/projected/0327ce11-e740-472b-8037-095af6cad376-kube-api-access-vdngn\") pod \"dns-default-xdcw7\" (UID: \"0327ce11-e740-472b-8037-095af6cad376\") " pod="openshift-dns/dns-default-xdcw7" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.112848 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gklcm\" (UniqueName: \"kubernetes.io/projected/7c5b52bd-6cb5-4544-9c7d-b374210ae44d-kube-api-access-gklcm\") pod \"csi-hostpathplugin-9bbb7\" (UID: \"7c5b52bd-6cb5-4544-9c7d-b374210ae44d\") " pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.132629 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.137059 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m9rf\" (UniqueName: \"kubernetes.io/projected/576c7738-88c3-450e-b9c2-c291f73191b8-kube-api-access-5m9rf\") pod \"openshift-controller-manager-operator-756b6f6bc6-n9tp2\" (UID: \"576c7738-88c3-450e-b9c2-c291f73191b8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n9tp2" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.151451 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kctjn\" (UniqueName: \"kubernetes.io/projected/eca25558-ed2d-42c5-bf06-b19d17fe60cf-kube-api-access-kctjn\") pod \"service-ca-9c57cc56f-2dct6\" (UID: \"eca25558-ed2d-42c5-bf06-b19d17fe60cf\") " pod="openshift-service-ca/service-ca-9c57cc56f-2dct6" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.153619 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:57 crc kubenswrapper[4765]: E0121 13:04:57.155434 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:57.655405365 +0000 UTC m=+158.673131257 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.164843 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5cllr" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.169513 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x4zpp" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.179525 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdbdj\" (UniqueName: \"kubernetes.io/projected/17f0cd0d-b1e3-42d0-abde-21e830e40e5d-kube-api-access-zdbdj\") pod \"multus-admission-controller-857f4d67dd-79kcs\" (UID: \"17f0cd0d-b1e3-42d0-abde-21e830e40e5d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-79kcs" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.183848 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.193934 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n9tp2" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.194573 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/40064ef5-d679-4224-af54-a21488bbbb11-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-57jzj\" (UID: \"40064ef5-d679-4224-af54-a21488bbbb11\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-57jzj" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.211258 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dab7fc80-6af1-4650-9cc6-875e36327b3f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l6t7p\" (UID: \"dab7fc80-6af1-4650-9cc6-875e36327b3f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6t7p" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.211442 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.213235 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4zqn6" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.228036 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgn96\" (UniqueName: \"kubernetes.io/projected/3506449d-eca0-49d4-8a8f-dc8bc347b258-kube-api-access-wgn96\") pod \"service-ca-operator-777779d784-f5w2t\" (UID: \"3506449d-eca0-49d4-8a8f-dc8bc347b258\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5w2t" Jan 21 13:04:57 crc kubenswrapper[4765]: E0121 13:04:57.228172 4765 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 21 13:04:57 crc kubenswrapper[4765]: E0121 13:04:57.228256 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-client-ca podName:fc58cdb9-8e5c-426c-a193-994e3b2ce117 nodeName:}" failed. No retries permitted until 2026-01-21 13:04:58.228230853 +0000 UTC m=+159.245956675 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-client-ca") pod "controller-manager-879f6c89f-dhjpc" (UID: "fc58cdb9-8e5c-426c-a193-994e3b2ce117") : failed to sync configmap cache: timed out waiting for the condition Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.228891 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2dct6" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.233181 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.246923 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-57jzj" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.247633 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.251845 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxfs8\" (UniqueName: \"kubernetes.io/projected/7abbef71-8ead-4d5e-afc9-45a1195804cd-kube-api-access-bxfs8\") pod \"packageserver-d55dfcdfc-9w7tp\" (UID: \"7abbef71-8ead-4d5e-afc9-45a1195804cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.259740 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:57 crc kubenswrapper[4765]: E0121 13:04:57.260538 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:57.760521783 +0000 UTC m=+158.778247605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.263253 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6t7p" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.268242 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xdcw7" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.285315 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w6s52"] Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.322774 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5c5g" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.324401 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/320c7cb7-c625-492f-9cab-d9f2858c5742-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rzw49\" (UID: \"320c7cb7-c625-492f-9cab-d9f2858c5742\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rzw49" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.337591 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c"] Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.339263 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xfs5k"] Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.339581 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq"] Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.346036 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbpkt\" (UniqueName: \"kubernetes.io/projected/fc2d5125-b816-4500-a5f1-99e7fd676f23-kube-api-access-mbpkt\") pod \"migrator-59844c95c7-srbvc\" (UID: \"fc2d5125-b816-4500-a5f1-99e7fd676f23\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-srbvc" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.348711 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzdrg\" (UniqueName: \"kubernetes.io/projected/02ed0834-ca94-42c2-a597-4f8b72d265b5-kube-api-access-hzdrg\") pod \"kube-storage-version-migrator-operator-b67b599dd-xjvvk\" (UID: \"02ed0834-ca94-42c2-a597-4f8b72d265b5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xjvvk" Jan 21 13:04:57 crc kubenswrapper[4765]: W0121 13:04:57.353577 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd572b4ba_2f55_43ef_8b71_af94f9519768.slice/crio-e741a09617ac587501903f8cdebd8423812596b9cbeed12e081551d1c1d93dac WatchSource:0}: Error finding container e741a09617ac587501903f8cdebd8423812596b9cbeed12e081551d1c1d93dac: Status 404 returned error can't find the container with id e741a09617ac587501903f8cdebd8423812596b9cbeed12e081551d1c1d93dac Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.365069 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:57 crc kubenswrapper[4765]: E0121 13:04:57.365344 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:57.865323351 +0000 UTC m=+158.883049183 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.365508 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:57 crc kubenswrapper[4765]: E0121 13:04:57.366013 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:57.86599356 +0000 UTC m=+158.883719382 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.368195 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2mh2\" (UniqueName: \"kubernetes.io/projected/d6e131d4-811c-416f-bbc9-e83007e9a548-kube-api-access-f2mh2\") pod \"olm-operator-6b444d44fb-rhvp6\" (UID: \"d6e131d4-811c-416f-bbc9-e83007e9a548\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.377971 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fss4k\" (UniqueName: \"kubernetes.io/projected/726a62c0-ba93-4fff-a141-09fefec9f93e-kube-api-access-fss4k\") pod \"machine-config-operator-74547568cd-mtskd\" (UID: \"726a62c0-ba93-4fff-a141-09fefec9f93e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.383283 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5q9s\" (UniqueName: \"kubernetes.io/projected/eb34fa0a-229a-4ef4-815b-93d888c19e84-kube-api-access-j5q9s\") pod \"catalog-operator-68c6474976-fp5vd\" (UID: \"eb34fa0a-229a-4ef4-815b-93d888c19e84\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.389634 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc4bm\" (UniqueName: \"kubernetes.io/projected/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-kube-api-access-fc4bm\") pod \"collect-profiles-29483340-pnjtd\" (UID: \"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.394419 4765 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-srbvc" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.407708 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xjvvk" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.409379 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/18432a06-6a3b-451d-87d6-42ca779acf9f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-hlkc2\" (UID: \"18432a06-6a3b-451d-87d6-42ca779acf9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.431328 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg9p7\" (UniqueName: \"kubernetes.io/projected/36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c-kube-api-access-jg9p7\") pod \"machine-config-server-7ss5d\" (UID: \"36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c\") " pod="openshift-machine-config-operator/machine-config-server-7ss5d" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.441977 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.446988 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-79kcs" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.461905 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.462618 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d6t9\" (UniqueName: \"kubernetes.io/projected/18432a06-6a3b-451d-87d6-42ca779acf9f-kube-api-access-8d6t9\") pod \"ingress-operator-5b745b69d9-hlkc2\" (UID: \"18432a06-6a3b-451d-87d6-42ca779acf9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.467073 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:57 crc kubenswrapper[4765]: E0121 13:04:57.467513 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:57.967495418 +0000 UTC m=+158.985221240 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.469089 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-462gv\" (UniqueName: \"kubernetes.io/projected/ec425a1a-0a7b-43ad-bbcd-ae31a5176e2b-kube-api-access-462gv\") pod \"package-server-manager-789f6589d5-qxjpg\" (UID: \"ec425a1a-0a7b-43ad-bbcd-ae31a5176e2b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qxjpg" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.501747 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.503346 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5w2t" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.508900 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx"] Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.509313 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.510038 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-l7658"] Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.515547 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rzw49" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.515618 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-475rj\" (UniqueName: \"kubernetes.io/projected/be901288-fb35-4b18-a7a6-92bebcc7ff38-kube-api-access-475rj\") pod \"ingress-canary-cvk5w\" (UID: \"be901288-fb35-4b18-a7a6-92bebcc7ff38\") " pod="openshift-ingress-canary/ingress-canary-cvk5w" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.520955 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.557687 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cvk5w" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.569678 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:57 crc kubenswrapper[4765]: E0121 13:04:57.570062 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:58.070044415 +0000 UTC m=+159.087770247 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.580916 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-7ss5d" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.587413 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.617363 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6" Jan 21 13:04:57 crc kubenswrapper[4765]: W0121 13:04:57.620954 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0be5f3b8_eeae_405b_a836_e806531a57e0.slice/crio-28528cae8e5c9dc2b7b2bdda46ae05bd786b633a4867bcf0315a0b81f6776b86 WatchSource:0}: Error finding container 28528cae8e5c9dc2b7b2bdda46ae05bd786b633a4867bcf0315a0b81f6776b86: Status 404 returned error can't find the container with id 28528cae8e5c9dc2b7b2bdda46ae05bd786b633a4867bcf0315a0b81f6776b86 Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.672047 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:57 crc kubenswrapper[4765]: E0121 13:04:57.672779 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:58.172755676 +0000 UTC m=+159.190481508 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.704975 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x4zpp"] Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.723527 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qxjpg" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.770325 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2dct6"] Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.774115 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:57 crc kubenswrapper[4765]: E0121 13:04:57.774441 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:58.274427938 +0000 UTC m=+159.292153760 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.882132 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:57 crc kubenswrapper[4765]: E0121 13:04:57.882674 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:58.382634771 +0000 UTC m=+159.400360593 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.930551 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-5zz49" podStartSLOduration=132.930510341 podStartE2EDuration="2m12.930510341s" podCreationTimestamp="2026-01-21 13:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:04:57.904571656 +0000 UTC m=+158.922297478" watchObservedRunningTime="2026-01-21 13:04:57.930510341 +0000 UTC m=+158.948236173" Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.955928 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5cllr"] Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.971939 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dzwvz"] Jan 21 13:04:57 crc kubenswrapper[4765]: I0121 13:04:57.983804 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:57 crc kubenswrapper[4765]: E0121 13:04:57.984374 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:58.484354195 +0000 UTC m=+159.502080017 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.089420 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:58 crc kubenswrapper[4765]: E0121 13:04:58.090005 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:58.589977076 +0000 UTC m=+159.607702898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.120702 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9bbb7"] Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.147476 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" event={"ID":"1d2560a8-7f01-4b0b-b05a-443fc3be98d1","Type":"ContainerStarted","Data":"16dfe1d0f3605a76ddde0dce78c735a5b6dbe272949af43c6492cafb1b15a928"} Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.173599 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92hfr" event={"ID":"67f85054-6343-454e-9f9f-eebadd266b08","Type":"ContainerStarted","Data":"834ea72a4850ab5db7f0f3c774fe4c51d29263938fd0d2c74511508dbce2dbe7"} Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.178386 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2dct6" event={"ID":"eca25558-ed2d-42c5-bf06-b19d17fe60cf","Type":"ContainerStarted","Data":"ddc33ce24712746d7803ce782328249c97296db06b57dfb9e79c2e89616b4df8"} Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.191148 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:58 crc kubenswrapper[4765]: E0121 13:04:58.191632 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:58.691615418 +0000 UTC m=+159.709341240 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.231046 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx" event={"ID":"95b5195e-ccd9-451a-baf8-ee70aaa0e650","Type":"ContainerStarted","Data":"bf6147af3399c626ffb17ecc256e4d9547a4571761dd8a4d12726091b3fc42fa"} Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.268836 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5zz49" event={"ID":"f1c56430-35ae-4e7c-9f5a-108205dbe2b3","Type":"ContainerStarted","Data":"935b0711698e8ec55fca1481c2050da4b466a391d2c911e6f85b257effde7cac"} Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.288793 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hg5vm" event={"ID":"7cb01aaa-41aa-442c-b18c-d345abd3d3d9","Type":"ContainerStarted","Data":"a75021753f6c2a50ec359cb99b4d4c1b0c7ebddd05354982d96ef342fe72c202"} Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.292172 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.292321 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-client-ca\") pod \"controller-manager-879f6c89f-dhjpc\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc" Jan 21 13:04:58 crc kubenswrapper[4765]: E0121 13:04:58.292552 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:58.792514379 +0000 UTC m=+159.810240201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.298483 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-client-ca\") pod \"controller-manager-879f6c89f-dhjpc\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc" Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.302756 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" event={"ID":"d572b4ba-2f55-43ef-8b71-af94f9519768","Type":"ContainerStarted","Data":"e741a09617ac587501903f8cdebd8423812596b9cbeed12e081551d1c1d93dac"} Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.347316 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc" Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.373495 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l7658" event={"ID":"0be5f3b8-eeae-405b-a836-e806531a57e0","Type":"ContainerStarted","Data":"28528cae8e5c9dc2b7b2bdda46ae05bd786b633a4867bcf0315a0b81f6776b86"} Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.412027 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:58 crc kubenswrapper[4765]: E0121 13:04:58.412523 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:58.912506027 +0000 UTC m=+159.930231849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:58 crc kubenswrapper[4765]: W0121 13:04:58.397402 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36ef7ee5_bfc8_494c_ac11_4f02aa7bb86c.slice/crio-b49543d826e12b2c0591289eb3334511b922b29a7e0d1cda4629fd710df2111f WatchSource:0}: Error finding container b49543d826e12b2c0591289eb3334511b922b29a7e0d1cda4629fd710df2111f: Status 404 returned error can't find the container with id b49543d826e12b2c0591289eb3334511b922b29a7e0d1cda4629fd710df2111f Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.447295 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mnwzz" event={"ID":"c35257f3-6d8a-4917-a956-3b71a0e54c23","Type":"ContainerStarted","Data":"bfa7526187918fc93d3d5dc0a58d3f77c1f571661ed96230a2f0c7304e8d848f"} Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.447345 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mnwzz" event={"ID":"c35257f3-6d8a-4917-a956-3b71a0e54c23","Type":"ContainerStarted","Data":"e819fb7125efeee5d607d1269b7bbc5e170fcbb2dd07718a6865f5d9e75229fa"} Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.452725 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xdcw7"] Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.490891 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-5zz49" Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.515390 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:58 crc kubenswrapper[4765]: E0121 13:04:58.517610 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:59.017586744 +0000 UTC m=+160.035312576 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.518429 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:58 crc kubenswrapper[4765]: E0121 13:04:58.520107 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:59.020091923 +0000 UTC m=+160.037817745 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.541927 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn" event={"ID":"238886b4-14ad-4a1c-8ba4-84b652601186","Type":"ContainerStarted","Data":"bb5f115f4b661c2284a5427a567f5537f5267c2f718eb0d4824412a2ef8131c6"} Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.541997 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn" event={"ID":"238886b4-14ad-4a1c-8ba4-84b652601186","Type":"ContainerStarted","Data":"aee15c918dacb5ec9a381d619b9807b9adf889edf5e2390da7f958b54d39db63"} Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.542943 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-r5c5g"] Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.596327 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w6s52" event={"ID":"58192d0b-35de-4d58-8037-559360392628","Type":"ContainerStarted","Data":"bf12af1eed23e11496e8fd6c756bbb3e62f1dfc8867dc3e6ba0e332a9deb07bf"} Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.599552 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4zqn6"] Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.619481 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 
13:04:58 crc kubenswrapper[4765]: E0121 13:04:58.623850 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:59.123824232 +0000 UTC m=+160.141550054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.624926 4765 generic.go:334] "Generic (PLEG): container finished" podID="b0a0a1c1-7631-4b40-8a54-268af3d95cb6" containerID="5b32ff768fd589af881c37fc29ebc85c81cd0621253068b78e85ff057851db8d" exitCode=0 Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.625023 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp" event={"ID":"b0a0a1c1-7631-4b40-8a54-268af3d95cb6","Type":"ContainerDied","Data":"5b32ff768fd589af881c37fc29ebc85c81cd0621253068b78e85ff057851db8d"} Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.671487 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6t7p"] Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.675373 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x4zpp" event={"ID":"50ea39eb-559e-4298-9133-4d2a5c7890cb","Type":"ContainerStarted","Data":"4213c2ccf747e43240764724fc3237363094d40c46e0591132ae5608d2f67813"} Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.722046 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:58 crc kubenswrapper[4765]: E0121 13:04:58.723764 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:59.223745816 +0000 UTC m=+160.241471638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.890458 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:58 crc kubenswrapper[4765]: E0121 13:04:58.900795 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:59.400757996 +0000 UTC m=+160.418483808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.901763 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-57jzj"] Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.905063 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq" event={"ID":"6525c86b-8810-4639-8d16-93d25fac15a9","Type":"ContainerStarted","Data":"b11fdccfeaa59d16165d7b17dde9993f7ee695e01951d26564cd17580bfceaac"} Jan 21 13:04:58 crc kubenswrapper[4765]: I0121 13:04:58.934281 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fq5sj" event={"ID":"48998ce4-56d3-439e-90c5-c7caa4b8344f","Type":"ContainerStarted","Data":"5ec63d572746cdedd431c94b2df2cc2f98b6b6e2aec27eea80f2634cdd0d6ed6"} Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.012564 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:59 crc kubenswrapper[4765]: E0121 13:04:59.013338 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:59.513314608 +0000 UTC m=+160.531040610 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.107173 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-srbvc"] Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.120771 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:59 crc kubenswrapper[4765]: E0121 13:04:59.121117 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:59.621098339 +0000 UTC m=+160.638824161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.195501 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-79kcs"] Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.229359 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:59 crc kubenswrapper[4765]: E0121 13:04:59.229817 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:59.729802156 +0000 UTC m=+160.747527978 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.234283 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nkzc2"] Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.300909 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n9tp2"] Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.337867 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:59 crc kubenswrapper[4765]: E0121 13:04:59.338199 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:04:59.838179543 +0000 UTC m=+160.855905365 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.353080 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rzw49"] Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.441554 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:59 crc kubenswrapper[4765]: E0121 13:04:59.442098 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:04:59.942078347 +0000 UTC m=+160.959804169 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.512640 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fq5sj" podStartSLOduration=134.512601211 podStartE2EDuration="2m14.512601211s" podCreationTimestamp="2026-01-21 13:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:04:59.484714192 +0000 UTC m=+160.502440034" watchObservedRunningTime="2026-01-21 13:04:59.512601211 +0000 UTC m=+160.530327033" Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.513625 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f5w2t"] Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.548333 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:59 crc kubenswrapper[4765]: E0121 13:04:59.548918 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:00.048901622 +0000 UTC m=+161.066627444 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.571801 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-25rmn" podStartSLOduration=135.571768552 podStartE2EDuration="2m15.571768552s" podCreationTimestamp="2026-01-21 13:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:04:59.520660173 +0000 UTC m=+160.538385995" watchObservedRunningTime="2026-01-21 13:04:59.571768552 +0000 UTC m=+160.589494374" Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.652847 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:59 crc kubenswrapper[4765]: E0121 13:04:59.653184 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:00.153170846 +0000 UTC m=+161.170896668 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.653454 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2"] Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.683781 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qxjpg"] Jan 21 13:04:59 crc kubenswrapper[4765]: W0121 13:04:59.743585 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod576c7738_88c3_450e_b9c2_c291f73191b8.slice/crio-ebdf924ce2e54e3639ec69a2659b4de4d50a5e1f7344237ccbbfc02bfb160384 WatchSource:0}: Error finding container ebdf924ce2e54e3639ec69a2659b4de4d50a5e1f7344237ccbbfc02bfb160384: Status 404 returned error can't find the container with id ebdf924ce2e54e3639ec69a2659b4de4d50a5e1f7344237ccbbfc02bfb160384 Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.755671 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:04:59 crc kubenswrapper[4765]: E0121 13:04:59.756694 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:00.256664018 +0000 UTC m=+161.274389840 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.859411 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:04:59 crc kubenswrapper[4765]: E0121 13:04:59.860040 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:00.360010686 +0000 UTC m=+161.377736698 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.896865 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd"]
Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.926179 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cvk5w"]
Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.960955 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:04:59 crc kubenswrapper[4765]: E0121 13:04:59.961429 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:00.461408351 +0000 UTC m=+161.479134173 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.962805 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5c5g" event={"ID":"1bd0de6e-9060-46fd-9b0e-aac63b762b0d","Type":"ContainerStarted","Data":"d8ee61fb0207de94b64ea7113c40830cbe212312d6a28d17c004a16b349e36ff"}
Jan 21 13:04:59 crc kubenswrapper[4765]: I0121 13:04:59.991025 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l7658" event={"ID":"0be5f3b8-eeae-405b-a836-e806531a57e0","Type":"ContainerStarted","Data":"2c4ca1e01035568324e9dc53a3f1b1c8ede91e249c6f3c0726820c09063f503d"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.008940 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6"]
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.022954 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-79kcs" event={"ID":"17f0cd0d-b1e3-42d0-abde-21e830e40e5d","Type":"ContainerStarted","Data":"8527f0309e16ba956a9fbbbd152f130f77510201585518da2e48056a52ce9e4b"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.030302 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp"]
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.033441 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd"]
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.052824 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rzw49" event={"ID":"320c7cb7-c625-492f-9cab-d9f2858c5742","Type":"ContainerStarted","Data":"f1fb31837cb1b6961a588f4cdd1fa0da3210ed05bee850a44a83aafda56de182"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.063912 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:00 crc kubenswrapper[4765]: E0121 13:05:00.064302 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:00.564283417 +0000 UTC m=+161.582009249 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.067494 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92hfr" event={"ID":"67f85054-6343-454e-9f9f-eebadd266b08","Type":"ContainerStarted","Data":"7e39b65ae95b6f8b29923072b633b59aaa727807c32417247dbad80db3eead00"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.082464 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5cllr" event={"ID":"193f8517-94f7-42fe-9fe2-0bdb69cc8424","Type":"ContainerStarted","Data":"478375014bfc7c739434a152a30d124d01703feb9d57be69e072191a5d78358e"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.103365 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fq5sj" event={"ID":"48998ce4-56d3-439e-90c5-c7caa4b8344f","Type":"ContainerStarted","Data":"8fc8ec275d91be910f907913e26325a8294853b4d46311be166d2a448a387d2b"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.111622 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-92hfr" podStartSLOduration=136.111597231 podStartE2EDuration="2m16.111597231s" podCreationTimestamp="2026-01-21 13:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:00.105442742 +0000 UTC m=+161.123168564" watchObservedRunningTime="2026-01-21 13:05:00.111597231 +0000 UTC m=+161.129323053"
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.111860 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6t7p" event={"ID":"dab7fc80-6af1-4650-9cc6-875e36327b3f","Type":"ContainerStarted","Data":"ad5919a355040f9c5da070ff169f4baaba704de52f37217b3fea408c2947bf78"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.116114 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xdcw7" event={"ID":"0327ce11-e740-472b-8037-095af6cad376","Type":"ContainerStarted","Data":"3d3b2a38c49f7038686df92d1d17077dbf7ba7f61a7746b68207c10efeff84ff"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.118101 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mnwzz" event={"ID":"c35257f3-6d8a-4917-a956-3b71a0e54c23","Type":"ContainerStarted","Data":"154a936b483b4e1f070616f4bbeec6bb3e16358caea3a6755373a04ba3ebe63c"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.119714 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5w2t" event={"ID":"3506449d-eca0-49d4-8a8f-dc8bc347b258","Type":"ContainerStarted","Data":"fc6dbf6dde5f5d27bd46c35d3aae07336e5eff3337096e878fa853344b0af4b2"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.123715 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-srbvc" event={"ID":"fc2d5125-b816-4500-a5f1-99e7fd676f23","Type":"ContainerStarted","Data":"13003fd8ce3a3fe1e2254031dcad21bf007308b29bc1bfd784c7391b1ee8d576"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.126269 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq" event={"ID":"6525c86b-8810-4639-8d16-93d25fac15a9","Type":"ContainerStarted","Data":"2e23bc394912bb8db67faefe272b87f521c37c4cb2b09a7e6379c1b9824c921b"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.127891 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq"
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.128160 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-7ss5d" event={"ID":"36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c","Type":"ContainerStarted","Data":"b49543d826e12b2c0591289eb3334511b922b29a7e0d1cda4629fd710df2111f"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.138117 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-57jzj" event={"ID":"40064ef5-d679-4224-af54-a21488bbbb11","Type":"ContainerStarted","Data":"92d782b40342d32a03e6fa29ed9d207aa17601f6ca11c0a4dd0dec3e482cfcc2"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.138522 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-l7658" podStartSLOduration=135.138504483 podStartE2EDuration="2m15.138504483s" podCreationTimestamp="2026-01-21 13:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:00.133754062 +0000 UTC m=+161.151479884" watchObservedRunningTime="2026-01-21 13:05:00.138504483 +0000 UTC m=+161.156230305"
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.144464 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" event={"ID":"1d2560a8-7f01-4b0b-b05a-443fc3be98d1","Type":"ContainerStarted","Data":"1162d5d0f2b2a1a13f23c9a9939887f6946a8055a0eef336560bf18a8121dd48"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.146058 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" event={"ID":"9e2101c8-3b98-4b58-959a-2cda2a8d08cb","Type":"ContainerStarted","Data":"bf0df5e9b7503a1878a05502cff34defd892d2916303140b1f7660a4271a3a57"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.152157 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-mnwzz" podStartSLOduration=134.152138899 podStartE2EDuration="2m14.152138899s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:00.151055249 +0000 UTC m=+161.168781091" watchObservedRunningTime="2026-01-21 13:05:00.152138899 +0000 UTC m=+161.169864721"
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.169565 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:00 crc kubenswrapper[4765]: E0121 13:05:00.173600 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:00.67357446 +0000 UTC m=+161.691300282 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.187875 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq" podStartSLOduration=134.187847573 podStartE2EDuration="2m14.187847573s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:00.185078007 +0000 UTC m=+161.202803829" watchObservedRunningTime="2026-01-21 13:05:00.187847573 +0000 UTC m=+161.205573405"
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.195235 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w6s52" event={"ID":"58192d0b-35de-4d58-8037-559360392628","Type":"ContainerStarted","Data":"c9400666f869f8b6ef96f88832ff62c1ed38874779605cba8fbf3afa02e8f8fb"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.230520 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" event={"ID":"de62a4d5-de79-4ad5-983d-7071fb85dce8","Type":"ContainerStarted","Data":"22a865e2f709acb91802cb5d7502e87abd08379ea81180462dc7a8df7f550ae1"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.249319 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xjvvk"]
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.260443 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-w6s52" podStartSLOduration=136.260417784 podStartE2EDuration="2m16.260417784s" podCreationTimestamp="2026-01-21 13:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:00.230938981 +0000 UTC m=+161.248664803" watchObservedRunningTime="2026-01-21 13:05:00.260417784 +0000 UTC m=+161.278143606"
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.275640 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:00 crc kubenswrapper[4765]: E0121 13:05:00.276727 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:00.776712893 +0000 UTC m=+161.794438715 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.280309 4765 generic.go:334] "Generic (PLEG): container finished" podID="d572b4ba-2f55-43ef-8b71-af94f9519768" containerID="477e74f50bf3aad7a41f8fd423f78d83fccf55d3651cce8f99645e8436dd4b48" exitCode=0
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.280630 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" event={"ID":"d572b4ba-2f55-43ef-8b71-af94f9519768","Type":"ContainerDied","Data":"477e74f50bf3aad7a41f8fd423f78d83fccf55d3651cce8f99645e8436dd4b48"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.288609 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" event={"ID":"7c5b52bd-6cb5-4544-9c7d-b374210ae44d","Type":"ContainerStarted","Data":"36082f6529ea0161823e662f23c6bf3310cf4bec4c1e27397691075c3675b175"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.291627 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhjpc"]
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.299629 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n9tp2" event={"ID":"576c7738-88c3-450e-b9c2-c291f73191b8","Type":"ContainerStarted","Data":"ebdf924ce2e54e3639ec69a2659b4de4d50a5e1f7344237ccbbfc02bfb160384"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.301236 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mww4w" event={"ID":"08434441-0009-483c-84b1-86d78ac699f4","Type":"ContainerStarted","Data":"b7bddb1c332279dda22fb02f903d0a5c1df43342a88cb4b6318e65f6aa1b740c"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.307825 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd"]
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.312564 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4zqn6" event={"ID":"6f858ebc-0551-4b6c-86e5-ab124ca2b27f","Type":"ContainerStarted","Data":"41eb087091dd31182ab9ec22e0c6e617542ce272dca91d8955ae34faadcb188c"}
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.531562 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:00 crc kubenswrapper[4765]: E0121 13:05:00.545391 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:01.045354758 +0000 UTC m=+162.063080590 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.634293 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:00 crc kubenswrapper[4765]: E0121 13:05:00.634913 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:01.134896536 +0000 UTC m=+162.152622358 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.736013 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:00 crc kubenswrapper[4765]: E0121 13:05:00.736683 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:01.236664701 +0000 UTC m=+162.254390523 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.766779 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq"
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.848129 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:00 crc kubenswrapper[4765]: E0121 13:05:00.848583 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:01.348567256 +0000 UTC m=+162.366293088 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.934491 4765 csr.go:261] certificate signing request csr-4s2xj is approved, waiting to be issued
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.944923 4765 csr.go:257] certificate signing request csr-4s2xj is issued
Jan 21 13:05:00 crc kubenswrapper[4765]: I0121 13:05:00.948919 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:00 crc kubenswrapper[4765]: E0121 13:05:00.949487 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:01.449429876 +0000 UTC m=+162.467155728 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.050526 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:01 crc kubenswrapper[4765]: E0121 13:05:01.050892 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:01.550873652 +0000 UTC m=+162.568599474 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.151177 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:01 crc kubenswrapper[4765]: E0121 13:05:01.151666 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:01.65164942 +0000 UTC m=+162.669375242 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.254011 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:01 crc kubenswrapper[4765]: E0121 13:05:01.254457 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:01.754443054 +0000 UTC m=+162.772168876 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.338957 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx" event={"ID":"95b5195e-ccd9-451a-baf8-ee70aaa0e650","Type":"ContainerStarted","Data":"5d7b84e409ab4c6fac9e647d6c0c47db5e489742c0ac06c56b93b5beabd8c1d3"}
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.340685 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd" event={"ID":"726a62c0-ba93-4fff-a141-09fefec9f93e","Type":"ContainerStarted","Data":"ec7d05be2c8678f807e4bd2a6d19b5e0898726e73ffd54bddb2ccdec548d8a35"}
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.342563 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xjvvk" event={"ID":"02ed0834-ca94-42c2-a597-4f8b72d265b5","Type":"ContainerStarted","Data":"213ecb34cc55b15e01f8a3be2d4a3d9386c22fcc955d791c59375f52a0f3021c"}
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.344447 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2dct6" event={"ID":"eca25558-ed2d-42c5-bf06-b19d17fe60cf","Type":"ContainerStarted","Data":"bb57b4b2332efffd665670176a3a951cc004b89bf01a0098839f55a55a8a8570"}
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.346546 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mww4w" event={"ID":"08434441-0009-483c-84b1-86d78ac699f4","Type":"ContainerStarted","Data":"d78a3ccbb2acaa9899c46a8cd508bbaabfdd221a75a639f1a09ea15e9e64b2ea"}
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.348626 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qxjpg" event={"ID":"ec425a1a-0a7b-43ad-bbcd-ae31a5176e2b","Type":"ContainerStarted","Data":"fafef15ad97d294cdde7f3e62b945026fed5c4b6409bec44c16f33503857d3bb"}
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.350972 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cvk5w" event={"ID":"be901288-fb35-4b18-a7a6-92bebcc7ff38","Type":"ContainerStarted","Data":"d0553bf0f0863fa0f4b1c13cd2dc2a29cf103b9026712f86debfac588d298822"}
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.352272 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc" event={"ID":"fc58cdb9-8e5c-426c-a193-994e3b2ce117","Type":"ContainerStarted","Data":"60922b69e1ca0878eeab7681e6c6936be5df495b6e464185ebe754ab6d62a4f0"}
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.354811 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:01 crc kubenswrapper[4765]: E0121 13:05:01.355312 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:01.855245112 +0000 UTC m=+162.872971114 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.355592 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:01 crc kubenswrapper[4765]: E0121 13:05:01.355996 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:01.855958792 +0000 UTC m=+162.873684614 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.357754 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x4zpp" event={"ID":"50ea39eb-559e-4298-9133-4d2a5c7890cb","Type":"ContainerStarted","Data":"e370bae35ba5aa7a9f245d475fa800bb1a110b0a9ffb4b05795b6b7de097dd29"}
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.361714 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd" event={"ID":"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b","Type":"ContainerStarted","Data":"2eb5c22845c7178041f58b037686dee473de444e8e572343232af894e45b2e33"}
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.363477 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2" event={"ID":"18432a06-6a3b-451d-87d6-42ca779acf9f","Type":"ContainerStarted","Data":"22c880f4a369061044c4f61208571279a2c3d6f49438d0239cf7ab0af01be622"}
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.367130 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd" event={"ID":"eb34fa0a-229a-4ef4-815b-93d888c19e84","Type":"ContainerStarted","Data":"95984abc9ab9a020377a64afc0fd416a0de1254d1b0192edbb64916d006c601a"}
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.368048 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" event={"ID":"7abbef71-8ead-4d5e-afc9-45a1195804cd","Type":"ContainerStarted","Data":"a6ae5a5e7abc50ff6e011476e169cd4b8ec6015cf420fcf0741c3805b22cd903"}
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.377485 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hg5vm" event={"ID":"7cb01aaa-41aa-442c-b18c-d345abd3d3d9","Type":"ContainerStarted","Data":"88c00571e497b99f8388b4269cd39b3011cb554c2ac1d9f5eb48a7a9efe7c8ad"}
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.379875 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6" event={"ID":"d6e131d4-811c-416f-bbc9-e83007e9a548","Type":"ContainerStarted","Data":"36eddb6100efe9b6bb2263e8fbf2f43b8622fcbf796d403c0879ef725bea1031"}
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.405500 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" podStartSLOduration=137.405479147 podStartE2EDuration="2m17.405479147s" podCreationTimestamp="2026-01-21 13:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:01.40378607 +0000 UTC m=+162.421511912" watchObservedRunningTime="2026-01-21 13:05:01.405479147 +0000 UTC m=+162.423204969"
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.457497 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:01 crc kubenswrapper[4765]: E0121 13:05:01.459585 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:01.959528447 +0000 UTC m=+162.977254269 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.559458 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:01 crc kubenswrapper[4765]: E0121 13:05:01.559872 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:02.059859303 +0000 UTC m=+163.077585125 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.660974 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:01 crc kubenswrapper[4765]: E0121 13:05:01.662701 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:02.162663566 +0000 UTC m=+163.180389388 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.763286 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:01 crc kubenswrapper[4765]: E0121 13:05:01.763724 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:02.263706752 +0000 UTC m=+163.281432574 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.864780 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:01 crc kubenswrapper[4765]: E0121 13:05:01.865378 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:02.365358154 +0000 UTC m=+163.383083976 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.947861 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 13:00:00 +0000 UTC, rotation deadline is 2026-10-15 09:38:22.634668381 +0000 UTC
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.948545 4765 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6404h33m20.686129434s for next certificate rotation
Jan 21 13:05:01 crc kubenswrapper[4765]: I0121 13:05:01.967369 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:01 crc kubenswrapper[4765]: E0121 13:05:01.967728 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:02.467714485 +0000 UTC m=+163.485440307 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.068835 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:02 crc kubenswrapper[4765]: E0121 13:05:02.069042 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:02.569017997 +0000 UTC m=+163.586743829 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.069254 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:02 crc kubenswrapper[4765]: E0121 13:05:02.069576 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:02.569564293 +0000 UTC m=+163.587290125 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.170562 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:02 crc kubenswrapper[4765]: E0121 13:05:02.170833 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:02.670798933 +0000 UTC m=+163.688524755 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.170916 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:02 crc kubenswrapper[4765]: E0121 13:05:02.171350 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:02.671305577 +0000 UTC m=+163.689031399 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.272252 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:02 crc kubenswrapper[4765]: E0121 13:05:02.272422 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:02.874323103 +0000 UTC m=+163.892048925 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.272576 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:02 crc kubenswrapper[4765]: E0121 13:05:02.272933 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:02.772922458 +0000 UTC m=+163.790648280 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.374146 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:02 crc kubenswrapper[4765]: E0121 13:05:02.374353 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:02.874323103 +0000 UTC m=+163.892048925 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.374492 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:02 crc kubenswrapper[4765]: E0121 13:05:02.374857 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:02.874848378 +0000 UTC m=+163.892574200 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.385391 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5c5g" event={"ID":"1bd0de6e-9060-46fd-9b0e-aac63b762b0d","Type":"ContainerStarted","Data":"10eaf014748a1376b3438d36044d2b755330e9aa9ec290f32a336d9a97e4a91a"}
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.386809 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xdcw7" event={"ID":"0327ce11-e740-472b-8037-095af6cad376","Type":"ContainerStarted","Data":"414070f8309936330b5ca758f48df025c4791f80963a5a4b36f435f212cda6d6"}
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.388742 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp" event={"ID":"b0a0a1c1-7631-4b40-8a54-268af3d95cb6","Type":"ContainerStarted","Data":"29a14f3cbbceaff6cccfe4b95d6e4989a29567a448303a8e8c990a66d94f7af7"}
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.390189 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-7ss5d" event={"ID":"36ef7ee5-bfc8-494c-ac11-4f02aa7bb86c","Type":"ContainerStarted","Data":"fe53f71045e0b86a870cfd36d84ae1edf5b696eb2a678371c0c0c024cf27c31e"}
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.391957 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6t7p" event={"ID":"dab7fc80-6af1-4650-9cc6-875e36327b3f","Type":"ContainerStarted","Data":"3dec56c00174f5b2b921748e5ba00be199902bfe221589cfb8e93172e0e207bc"}
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.393791 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" event={"ID":"de62a4d5-de79-4ad5-983d-7071fb85dce8","Type":"ContainerStarted","Data":"46eb260759cded0c901a66b6878cac473a9c57ad591f9fb26605fa55db48b36e"}
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.395457 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-57jzj" event={"ID":"40064ef5-d679-4224-af54-a21488bbbb11","Type":"ContainerStarted","Data":"36a3966144bc529515935db77f550b4955c04c6c555dc9b283499aa5fc4b58f9"}
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.397123 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5cllr" event={"ID":"193f8517-94f7-42fe-9fe2-0bdb69cc8424","Type":"ContainerStarted","Data":"049576b4c0d86317175881979475942eda9616fe80a9378e46f0936616040e2b"}
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.475647 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:02 crc kubenswrapper[4765]: E0121 13:05:02.475900 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:02.975857562 +0000 UTC m=+163.993583384 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.476035 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:02 crc kubenswrapper[4765]: E0121 13:05:02.476514 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:02.97650297 +0000 UTC m=+163.994228792 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.577026 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:02 crc kubenswrapper[4765]: E0121 13:05:02.577225 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:03.077182685 +0000 UTC m=+164.094908497 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.577346 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:02 crc kubenswrapper[4765]: E0121 13:05:02.579096 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:03.079054427 +0000 UTC m=+164.096780249 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.679069 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:05:02 crc kubenswrapper[4765]: E0121 13:05:02.679443 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:03.179423693 +0000 UTC m=+164.197149525 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.780374 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:02 crc kubenswrapper[4765]: E0121 13:05:02.780840 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:03.280827118 +0000 UTC m=+164.298552930 (durationBeforeRetry 500ms). 
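Note the scheduling in these errors: after each failure the operation key is embargoed, "No retries permitted until" roughly now+500ms, and any attempt that lands inside the window is rejected before a CSI call is even made (durationBeforeRetry 500ms). That is why the failures recur at a steady cadence instead of hot-looping. The toy gate below illustrates the pattern only; it is not kubelet's actual nestedpendingoperations code, which can also grow the delay on repeated failures, although in this capture it stays at the 500ms floor:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryGate embargoes a key for a fixed delay after a failure; attempts
    // inside the window fail fast, mirroring the "No retries permitted
    // until ..." lines in the log. A sketch, not kubelet's implementation.
    type retryGate struct {
        next  map[string]time.Time
        delay time.Duration
    }

    func (g *retryGate) run(key string, op func() error) error {
        if until, ok := g.next[key]; ok && time.Now().Before(until) {
            return fmt.Errorf("no retries permitted until %s", until.Format(time.RFC3339Nano))
        }
        if err := op(); err != nil {
            g.next[key] = time.Now().Add(g.delay) // durationBeforeRetry
            return err
        }
        delete(g.next, key)
        return nil
    }

    func main() {
        g := &retryGate{next: map[string]time.Time{}, delay: 500 * time.Millisecond}
        key := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db"
        _ = g.run(key, func() error { return errors.New("driver not registered") })
        // Immediately retrying is refused: we are still inside the window.
        fmt.Println(g.run(key, func() error { return nil }))
    }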
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.886114 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:05:02 crc kubenswrapper[4765]: E0121 13:05:02.886668 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:03.386638905 +0000 UTC m=+164.404364727 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.886980 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:02 crc kubenswrapper[4765]: E0121 13:05:02.887507 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:03.387489559 +0000 UTC m=+164.405215381 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.993960 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:05:02 crc kubenswrapper[4765]: E0121 13:05:02.994237 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:03.494166419 +0000 UTC m=+164.511892241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:02 crc kubenswrapper[4765]: I0121 13:05:02.994743 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:02 crc kubenswrapper[4765]: E0121 13:05:02.995193 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:03.495177447 +0000 UTC m=+164.512903269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.097960 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:05:03 crc kubenswrapper[4765]: E0121 13:05:03.098476 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:03.598454944 +0000 UTC m=+164.616180766 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.202310 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:03 crc kubenswrapper[4765]: E0121 13:05:03.202939 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:03.702924194 +0000 UTC m=+164.720650016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.235397 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.252321 4765 patch_prober.go:28] interesting pod/router-default-5444994796-mww4w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 13:05:03 crc kubenswrapper[4765]: [-]has-synced failed: reason withheld Jan 21 13:05:03 crc kubenswrapper[4765]: [+]process-running ok Jan 21 13:05:03 crc kubenswrapper[4765]: healthz check failed Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.252837 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mww4w" podUID="08434441-0009-483c-84b1-86d78ac699f4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.304444 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:05:03 crc kubenswrapper[4765]: E0121 13:05:03.304901 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:03.804882973 +0000 UTC m=+164.822608785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.418985 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:03 crc kubenswrapper[4765]: E0121 13:05:03.419452 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:03.91943144 +0000 UTC m=+164.937157262 (durationBeforeRetry 500ms). 
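The router probe failure above also shows the aggregated healthz body format: each sub-check reports [+] ok or [-] failed ("reason withheld" means detail is hidden from unauthenticated callers), and the endpoint answers 500 while any check fails, here backend-http and has-synced. The kubelet treats an HTTP probe as successful only when the status code falls in [200, 400), and keeps the start of the response body for the event message. A self-contained sketch of that success rule; the URL is illustrative, since the router's probe address is not printed in this log:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // probeHTTP applies the kubelet rule visible in the log: 2xx/3xx is
    // success, anything else (or a transport error such as "connection
    // refused") is failure; the body prefix becomes the probe output.
    func probeHTTP(url string) (bool, string) {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return false, err.Error()
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(io.LimitReader(resp.Body, 256))
        return resp.StatusCode >= 200 && resp.StatusCode < 400, string(body)
    }

    func main() {
        ok, body := probeHTTP("http://127.0.0.1:1936/healthz") // illustrative address
        fmt.Printf("healthy=%v start-of-body=%q\n", ok, body)
    }

The readiness failures further down ("connect: connection refused") are the transport-error branch of the same rule: the container is running, but its server socket is not listening yet.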
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.450611 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rzw49" event={"ID":"320c7cb7-c625-492f-9cab-d9f2858c5742","Type":"ContainerStarted","Data":"2ed8e80f21d91750c04bcdbb085ccde33d01a0f8f4fda534ce1ee974563dc11a"} Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.465157 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xjvvk" event={"ID":"02ed0834-ca94-42c2-a597-4f8b72d265b5","Type":"ContainerStarted","Data":"107401f2bbb7bde49e1b819b9b36ccc451fb18abb36ba1aedf24385ef4ff1c07"} Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.492495 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-srbvc" event={"ID":"fc2d5125-b816-4500-a5f1-99e7fd676f23","Type":"ContainerStarted","Data":"56e49c24b220377acab44c0639cfab4682887e93c972b27cb165af556119983d"} Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.509623 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-mww4w" podStartSLOduration=137.509595476 podStartE2EDuration="2m17.509595476s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:02.415837168 +0000 UTC m=+163.433562990" watchObservedRunningTime="2026-01-21 13:05:03.509595476 +0000 UTC m=+164.527321318" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.512990 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rzw49" podStartSLOduration=137.512969499 podStartE2EDuration="2m17.512969499s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:03.508027333 +0000 UTC m=+164.525753155" watchObservedRunningTime="2026-01-21 13:05:03.512969499 +0000 UTC m=+164.530695321" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.515899 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4zqn6" event={"ID":"6f858ebc-0551-4b6c-86e5-ab124ca2b27f","Type":"ContainerStarted","Data":"7763e111d299f5e8ff8dd17f7349c212f38f6acdc52c97a873b133b5bda12c8a"} Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.525687 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:05:03 crc kubenswrapper[4765]: E0121 13:05:03.526009 4765 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:04.025983437 +0000 UTC m=+165.043709259 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.526276 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:03 crc kubenswrapper[4765]: E0121 13:05:03.526873 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:04.026850511 +0000 UTC m=+165.044576333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.537948 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" event={"ID":"9e2101c8-3b98-4b58-959a-2cda2a8d08cb","Type":"ContainerStarted","Data":"632386d2d520623be929334a68291441fb44bc0df51e1e8f46eefbbd65ac8f72"} Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.563057 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cvk5w" event={"ID":"be901288-fb35-4b18-a7a6-92bebcc7ff38","Type":"ContainerStarted","Data":"8d9fff452131d47d7efc2d2e0df06c4b2a95f992716e25a699e398c48e55c7fc"} Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.585705 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xjvvk" podStartSLOduration=137.585687873 podStartE2EDuration="2m17.585687873s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:03.584095669 +0000 UTC m=+164.601821491" watchObservedRunningTime="2026-01-21 13:05:03.585687873 +0000 UTC m=+164.603413695" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.628231 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:05:03 crc kubenswrapper[4765]: E0121 13:05:03.630019 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:04.129989974 +0000 UTC m=+165.147715976 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.634669 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" event={"ID":"d572b4ba-2f55-43ef-8b71-af94f9519768","Type":"ContainerStarted","Data":"d136b81b2ee08028c0db51130b77453d049882aed0293e874edf1b4e943f11d5"} Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.664976 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-nkzc2" podStartSLOduration=137.664949928 podStartE2EDuration="2m17.664949928s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:03.662525301 +0000 UTC m=+164.680251123" watchObservedRunningTime="2026-01-21 13:05:03.664949928 +0000 UTC m=+164.682675750" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.673048 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd" event={"ID":"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b","Type":"ContainerStarted","Data":"1e1de584e78b0855b3a075eca7aab4239fac9a586c44eb22c777801b59307bc5"} Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.697195 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2" event={"ID":"18432a06-6a3b-451d-87d6-42ca779acf9f","Type":"ContainerStarted","Data":"2e71d7f2c8cf02eba9e005bc089d0b4d511c270082d8380071a637f565ac3a31"} Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.704929 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-cvk5w" podStartSLOduration=9.704899969 podStartE2EDuration="9.704899969s" podCreationTimestamp="2026-01-21 13:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:03.70275386 +0000 UTC m=+164.720479692" watchObservedRunningTime="2026-01-21 13:05:03.704899969 +0000 UTC m=+164.722625801" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.739992 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" 
(UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.747235 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd" event={"ID":"eb34fa0a-229a-4ef4-815b-93d888c19e84","Type":"ContainerStarted","Data":"eb412b770185a80bd0519dd5c93f2c919838279ec158fb0228eab271a72bd198"} Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.751684 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.764579 4765 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-fp5vd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.764685 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd" podUID="eb34fa0a-229a-4ef4-815b-93d888c19e84" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.767043 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-79kcs" event={"ID":"17f0cd0d-b1e3-42d0-abde-21e830e40e5d","Type":"ContainerStarted","Data":"a15ecd6d41e1330ff6dad2f0dd685a5373cc94795b039e9e4c3ee795e955b1bf"} Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.794615 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qxjpg" event={"ID":"ec425a1a-0a7b-43ad-bbcd-ae31a5176e2b","Type":"ContainerStarted","Data":"c45988e2a8284b4b50ce17b4a259784673922dc05ad8ecad05449112291b87b4"} Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.794663 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qxjpg" Jan 21 13:05:03 crc kubenswrapper[4765]: E0121 13:05:03.798744 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:04.298721886 +0000 UTC m=+165.316447708 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.812516 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" event={"ID":"7abbef71-8ead-4d5e-afc9-45a1195804cd","Type":"ContainerStarted","Data":"79e2c4bc57d86d4d4b014aa82c5c4f1f3c33071eb8e360f35d2e8f95810c7ac6"} Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.814376 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.815685 4765 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9w7tp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.815771 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" podUID="7abbef71-8ead-4d5e-afc9-45a1195804cd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.853517 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:05:03 crc kubenswrapper[4765]: E0121 13:05:03.855573 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:04.355549572 +0000 UTC m=+165.373275394 (durationBeforeRetry 500ms). 
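The event={...} payloads on the "SyncLoop (PLEG)" lines are plain JSON, which makes them easy to pull apart when triaging a capture like this one. A minimal decoder using the packageserver event above; the struct is ad hoc, with field names copied from the log keys rather than taken from kubelet's exported types:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // plegEvent matches the keys in the log's event={...} payloads.
    type plegEvent struct {
        ID   string // pod UID
        Type string // e.g. ContainerStarted
        Data string // container or sandbox ID
    }

    func main() {
        raw := `{"ID":"7abbef71-8ead-4d5e-afc9-45a1195804cd","Type":"ContainerStarted","Data":"79e2c4bc57d86d4d4b014aa82c5c4f1f3c33071eb8e360f35d2e8f95810c7ac6"}`
        var e plegEvent
        if err := json.Unmarshal([]byte(raw), &e); err != nil {
            panic(err)
        }
        fmt.Printf("pod %s: %s %s...\n", e.ID, e.Type, e.Data[:12])
    }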
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.868041 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd" event={"ID":"726a62c0-ba93-4fff-a141-09fefec9f93e","Type":"ContainerStarted","Data":"d0b769cdffd8625fc0e69f1c115a1e74cc7509e2fac7c2ef2e4490fe8432b628"} Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.870886 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd" podStartSLOduration=139.870840383 podStartE2EDuration="2m19.870840383s" podCreationTimestamp="2026-01-21 13:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:03.782168909 +0000 UTC m=+164.799894731" watchObservedRunningTime="2026-01-21 13:05:03.870840383 +0000 UTC m=+164.888566195" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.883625 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc" event={"ID":"fc58cdb9-8e5c-426c-a193-994e3b2ce117","Type":"ContainerStarted","Data":"7daf20d8f550c1dae853b4d7a1662050a7ba378e76433339532a2fe3175fdeec"} Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.898584 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.900986 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5w2t" event={"ID":"3506449d-eca0-49d4-8a8f-dc8bc347b258","Type":"ContainerStarted","Data":"601678165c1046fc18e69342a84d6e669bfe6869fd9e321994fc1de5c8a73c25"} Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.902740 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n9tp2" event={"ID":"576c7738-88c3-450e-b9c2-c291f73191b8","Type":"ContainerStarted","Data":"678e984d85b4abbc57bd508a5226625b398df44363f9cc22da59fa48a8225f07"} Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.916624 4765 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-dhjpc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.916724 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc" podUID="fc58cdb9-8e5c-426c-a193-994e3b2ce117" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.927178 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6" event={"ID":"d6e131d4-811c-416f-bbc9-e83007e9a548","Type":"ContainerStarted","Data":"40f6f477d7c0e91b5aa17e50113519aa2dfd9e0c55b9558067ce7a6618b3b43f"} Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.927382 4765 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rhvp6 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.927432 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6" podUID="d6e131d4-811c-416f-bbc9-e83007e9a548" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.929914 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.929950 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.929963 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.933250 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2" podStartSLOduration=137.933105 podStartE2EDuration="2m17.933105s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:03.857527356 +0000 UTC m=+164.875253178" watchObservedRunningTime="2026-01-21 13:05:03.933105 +0000 UTC m=+164.950830822" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.937190 4765 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dzwvz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.937250 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" podUID="de62a4d5-de79-4ad5-983d-7071fb85dce8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.957661 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qxjpg" podStartSLOduration=137.957636116 podStartE2EDuration="2m17.957636116s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:03.95452649 +0000 UTC m=+164.972252322" watchObservedRunningTime="2026-01-21 13:05:03.957636116 +0000 UTC m=+164.975361938" Jan 21 13:05:03 crc 
kubenswrapper[4765]: I0121 13:05:03.957772 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:03 crc kubenswrapper[4765]: I0121 13:05:03.958331 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c" podStartSLOduration=137.958322315 podStartE2EDuration="2m17.958322315s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:03.928013359 +0000 UTC m=+164.945739211" watchObservedRunningTime="2026-01-21 13:05:03.958322315 +0000 UTC m=+164.976048137" Jan 21 13:05:03 crc kubenswrapper[4765]: E0121 13:05:03.958689 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:04.458666044 +0000 UTC m=+165.476391866 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.060164 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:05:04 crc kubenswrapper[4765]: E0121 13:05:04.062097 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:04.562067575 +0000 UTC m=+165.579793397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.078538 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" podStartSLOduration=138.078512668 podStartE2EDuration="2m18.078512668s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:04.003725386 +0000 UTC m=+165.021451208" watchObservedRunningTime="2026-01-21 13:05:04.078512668 +0000 UTC m=+165.096238490" Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.118227 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l6t7p" podStartSLOduration=138.118192352 podStartE2EDuration="2m18.118192352s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:04.076848812 +0000 UTC m=+165.094574634" watchObservedRunningTime="2026-01-21 13:05:04.118192352 +0000 UTC m=+165.135918174" Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.119333 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd" podStartSLOduration=138.119327323 podStartE2EDuration="2m18.119327323s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:04.117268116 +0000 UTC m=+165.134993938" watchObservedRunningTime="2026-01-21 13:05:04.119327323 +0000 UTC m=+165.137053145" Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.165552 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:04 crc kubenswrapper[4765]: E0121 13:05:04.166033 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:04.66601724 +0000 UTC m=+165.683743052 (durationBeforeRetry 500ms). 
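The "Observed pod startup duration" lines are simple timestamp arithmetic: the reported podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, and since no image pull was observed for these pods (firstStartedPulling is the zero time 0001-01-01), the SLO and E2E figures coincide. The marketplace-operator entry above checks out exactly; the timestamps parse with Go's default time.Time format:

    package main

    import (
        "fmt"
        "time"
    )

    // Verifies the marketplace-operator line: 13:05:04.078512668 minus
    // 13:02:46 should reproduce podStartSLOduration=138.078512668.
    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2026-01-21 13:02:46 +0000 UTC")
        if err != nil {
            panic(err)
        }
        watched, err := time.Parse(layout, "2026-01-21 13:05:04.078512668 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Printf("podStartSLOduration=%.9f\n", watched.Sub(created).Seconds())
    }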
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.240174 4765 patch_prober.go:28] interesting pod/router-default-5444994796-mww4w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 13:05:04 crc kubenswrapper[4765]: [-]has-synced failed: reason withheld Jan 21 13:05:04 crc kubenswrapper[4765]: [+]process-running ok Jan 21 13:05:04 crc kubenswrapper[4765]: healthz check failed Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.240272 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mww4w" podUID="08434441-0009-483c-84b1-86d78ac699f4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.250403 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6" podStartSLOduration=138.250378475 podStartE2EDuration="2m18.250378475s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:04.247714402 +0000 UTC m=+165.265440224" watchObservedRunningTime="2026-01-21 13:05:04.250378475 +0000 UTC m=+165.268104287" Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.251517 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" podStartSLOduration=138.251507367 podStartE2EDuration="2m18.251507367s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:04.208552642 +0000 UTC m=+165.226278464" watchObservedRunningTime="2026-01-21 13:05:04.251507367 +0000 UTC m=+165.269233189" Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.267491 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:05:04 crc kubenswrapper[4765]: E0121 13:05:04.267992 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:04.76796526 +0000 UTC m=+165.785691082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.291275 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd" podStartSLOduration=138.291244912 podStartE2EDuration="2m18.291244912s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:04.274901321 +0000 UTC m=+165.292627143" watchObservedRunningTime="2026-01-21 13:05:04.291244912 +0000 UTC m=+165.308970734" Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.318908 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp" podStartSLOduration=139.318839163 podStartE2EDuration="2m19.318839163s" podCreationTimestamp="2026-01-21 13:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:04.316508458 +0000 UTC m=+165.334234280" watchObservedRunningTime="2026-01-21 13:05:04.318839163 +0000 UTC m=+165.336564995" Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.370174 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:04 crc kubenswrapper[4765]: E0121 13:05:04.370912 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:04.870894427 +0000 UTC m=+165.888620249 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.372820 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f5w2t" podStartSLOduration=138.37278903 podStartE2EDuration="2m18.37278903s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:04.36263759 +0000 UTC m=+165.380363412" watchObservedRunningTime="2026-01-21 13:05:04.37278903 +0000 UTC m=+165.390514842" Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.440793 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc" podStartSLOduration=138.440765373 podStartE2EDuration="2m18.440765373s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:04.440386463 +0000 UTC m=+165.458112285" watchObservedRunningTime="2026-01-21 13:05:04.440765373 +0000 UTC m=+165.458491195" Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.472132 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:05:04 crc kubenswrapper[4765]: E0121 13:05:04.472383 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:04.972335094 +0000 UTC m=+165.990060916 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.472869 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:04 crc kubenswrapper[4765]: E0121 13:05:04.473428 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:04.973417283 +0000 UTC m=+165.991143105 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.506458 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n9tp2" podStartSLOduration=138.506422603 podStartE2EDuration="2m18.506422603s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:04.500726806 +0000 UTC m=+165.518452648" watchObservedRunningTime="2026-01-21 13:05:04.506422603 +0000 UTC m=+165.524148425" Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.568793 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-2dct6" podStartSLOduration=138.568765102 podStartE2EDuration="2m18.568765102s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:04.566581341 +0000 UTC m=+165.584307183" watchObservedRunningTime="2026-01-21 13:05:04.568765102 +0000 UTC m=+165.586490924" Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.573985 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:05:04 crc kubenswrapper[4765]: E0121 13:05:04.574424 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:05.074389187 +0000 UTC m=+166.092114999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.646430 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d44mx" podStartSLOduration=138.646412172 podStartE2EDuration="2m18.646412172s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:04.642291819 +0000 UTC m=+165.660017641" watchObservedRunningTime="2026-01-21 13:05:04.646412172 +0000 UTC m=+165.664137994"
Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.676376 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:04 crc kubenswrapper[4765]: E0121 13:05:04.676815 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:05.17679991 +0000 UTC m=+166.194525732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.733498 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x4zpp" podStartSLOduration=138.733474902 podStartE2EDuration="2m18.733474902s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:04.724901606 +0000 UTC m=+165.742627428" watchObservedRunningTime="2026-01-21 13:05:04.733474902 +0000 UTC m=+165.751200724"
Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.770908 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.770988 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.780020 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:04 crc kubenswrapper[4765]: E0121 13:05:04.780927 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:05.280899299 +0000 UTC m=+166.298625121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.801517 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.826963 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-57jzj" podStartSLOduration=138.826940438 podStartE2EDuration="2m18.826940438s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:04.823931605 +0000 UTC m=+165.841657427" watchObservedRunningTime="2026-01-21 13:05:04.826940438 +0000 UTC m=+165.844666260"
Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.882204 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:04 crc kubenswrapper[4765]: E0121 13:05:04.883861 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:05.383827736 +0000 UTC m=+166.401553558 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.984932 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-hg5vm" podStartSLOduration=140.984896002 podStartE2EDuration="2m20.984896002s" podCreationTimestamp="2026-01-21 13:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:04.970692691 +0000 UTC m=+165.988418513" watchObservedRunningTime="2026-01-21 13:05:04.984896002 +0000 UTC m=+166.002621824"
Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.985028 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:04 crc kubenswrapper[4765]: E0121 13:05:04.985325 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:05.485309214 +0000 UTC m=+166.503035036 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.986357 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-79kcs" event={"ID":"17f0cd0d-b1e3-42d0-abde-21e830e40e5d","Type":"ContainerStarted","Data":"8acac29689045ddfcb290ec0d81a0920e729e68228a082f9a4f755d7c5fc5b1e"}
Jan 21 13:05:04 crc kubenswrapper[4765]: I0121 13:05:04.986377 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:04 crc kubenswrapper[4765]: E0121 13:05:04.987394 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:05.487385631 +0000 UTC m=+166.505111453 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.015386 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" event={"ID":"7c5b52bd-6cb5-4544-9c7d-b374210ae44d","Type":"ContainerStarted","Data":"6080aeec67f5ef27aa485a8ffd714b366b0866cb06a7fbe7e0193481c0efd2d3"}
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.015453 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" event={"ID":"7c5b52bd-6cb5-4544-9c7d-b374210ae44d","Type":"ContainerStarted","Data":"7d6c0f879880bbbafb5fbaae1a81c54f18485b3188e4f4a0ff3ab9570d6a2a5c"}
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.020309 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-7ss5d" podStartSLOduration=11.020281328 podStartE2EDuration="11.020281328s" podCreationTimestamp="2026-01-21 13:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:05.019763733 +0000 UTC m=+166.037489555" watchObservedRunningTime="2026-01-21 13:05:05.020281328 +0000 UTC m=+166.038007140"
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.025075 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5c5g" event={"ID":"1bd0de6e-9060-46fd-9b0e-aac63b762b0d","Type":"ContainerStarted","Data":"6c4c41dbc1a7cfc6057f026d1fd8b41b1ca0c067b8c11eeae9324f4bcf5a2061"}
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.027576 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xdcw7" event={"ID":"0327ce11-e740-472b-8037-095af6cad376","Type":"ContainerStarted","Data":"9b96028bc4ff3943f9e9956bafea5f594ff668f63df4cc70df97894e156eebd5"}
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.028096 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-xdcw7"
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.029266 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-srbvc" event={"ID":"fc2d5125-b816-4500-a5f1-99e7fd676f23","Type":"ContainerStarted","Data":"3b0c257dd5723857a1030527b0b0fdb8d653bbffa6e77eaca61cf834cc5f5ac2"}
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.035757 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qxjpg" event={"ID":"ec425a1a-0a7b-43ad-bbcd-ae31a5176e2b","Type":"ContainerStarted","Data":"1c546c9811506959a31e582c7218e76683a94bbdaf79c4d345d58d4811bf9985"}
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.043028 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mtskd" event={"ID":"726a62c0-ba93-4fff-a141-09fefec9f93e","Type":"ContainerStarted","Data":"f97099d29e165f73bd67e8084c6c531c8b5da24b63087373d3b27fd96d1fa6a5"}
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.070390 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-hlkc2" event={"ID":"18432a06-6a3b-451d-87d6-42ca779acf9f","Type":"ContainerStarted","Data":"165eb824bc48ab42f862ef5ee6122de501c2896cdc033d0f1ef44d6be411c8b6"}
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.088205 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.089506 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4zqn6" event={"ID":"6f858ebc-0551-4b6c-86e5-ab124ca2b27f","Type":"ContainerStarted","Data":"179dae53c48f6b8ebedd0bb2f065325ab13371b93007651f301604fda02db601"}
Jan 21 13:05:05 crc kubenswrapper[4765]: E0121 13:05:05.090015 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:05.589997439 +0000 UTC m=+166.607723261 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.090426 4765 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dzwvz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body=
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.090508 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" podUID="de62a4d5-de79-4ad5-983d-7071fb85dce8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused"
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.115459 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-hg5vm"
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.122511 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-5cllr" podStartSLOduration=140.122483745 podStartE2EDuration="2m20.122483745s" podCreationTimestamp="2026-01-21 13:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:05.120201332 +0000 UTC m=+166.137927164" watchObservedRunningTime="2026-01-21 13:05:05.122483745 +0000 UTC m=+166.140209567"
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.154667 4765 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-kn5fp container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.154755 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp" podUID="b0a0a1c1-7631-4b40-8a54-268af3d95cb6" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.157300 4765 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-kn5fp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.157359 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp" podUID="b0a0a1c1-7631-4b40-8a54-268af3d95cb6" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.169938 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-r5c5g" podStartSLOduration=139.169916502 podStartE2EDuration="2m19.169916502s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:05.167773683 +0000 UTC m=+166.185499505" watchObservedRunningTime="2026-01-21 13:05:05.169916502 +0000 UTC m=+166.187642324"
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.177905 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rhvp6"
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.192952 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.196717 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc"
Jan 21 13:05:05 crc kubenswrapper[4765]: E0121 13:05:05.199476 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:05.699454827 +0000 UTC m=+166.717180649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.243407 4765 patch_prober.go:28] interesting pod/router-default-5444994796-mww4w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 13:05:05 crc kubenswrapper[4765]: [-]has-synced failed: reason withheld
Jan 21 13:05:05 crc kubenswrapper[4765]: [+]process-running ok
Jan 21 13:05:05 crc kubenswrapper[4765]: healthz check failed
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.243482 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mww4w" podUID="08434441-0009-483c-84b1-86d78ac699f4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.254526 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fp5vd"
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.306065 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:05 crc kubenswrapper[4765]: E0121 13:05:05.306625 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:05.80660045 +0000 UTC m=+166.824326272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.336696 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-xdcw7" podStartSLOduration=11.336677239 podStartE2EDuration="11.336677239s" podCreationTimestamp="2026-01-21 13:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:05.26632807 +0000 UTC m=+166.284053892" watchObservedRunningTime="2026-01-21 13:05:05.336677239 +0000 UTC m=+166.354403071"
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.408417 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:05 crc kubenswrapper[4765]: E0121 13:05:05.408802 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:05.908784547 +0000 UTC m=+166.926510369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.419783 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-4zqn6" podStartSLOduration=139.419757929 podStartE2EDuration="2m19.419757929s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:05.340543076 +0000 UTC m=+166.358268908" watchObservedRunningTime="2026-01-21 13:05:05.419757929 +0000 UTC m=+166.437483751"
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.475352 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k"
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.508579 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-srbvc" podStartSLOduration=139.508548787 podStartE2EDuration="2m19.508548787s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:05.447124684 +0000 UTC m=+166.464850506" watchObservedRunningTime="2026-01-21 13:05:05.508548787 +0000 UTC m=+166.526274609"
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.509043 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:05 crc kubenswrapper[4765]: E0121 13:05:05.509179 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:06.009156713 +0000 UTC m=+167.026882535 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.509489 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:05 crc kubenswrapper[4765]: E0121 13:05:05.509868 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:06.009857233 +0000 UTC m=+167.027583055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.510460 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-79kcs" podStartSLOduration=139.510446229 podStartE2EDuration="2m19.510446229s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:05.508199007 +0000 UTC m=+166.525924839" watchObservedRunningTime="2026-01-21 13:05:05.510446229 +0000 UTC m=+166.528172051"
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.522149 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c"
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.522240 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c"
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.611309 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:05 crc kubenswrapper[4765]: E0121 13:05:05.642620 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:06.142579981 +0000 UTC m=+167.160305803 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.726517 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:05 crc kubenswrapper[4765]: E0121 13:05:05.757316 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:06.257294193 +0000 UTC m=+167.275020015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.829414 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:05 crc kubenswrapper[4765]: E0121 13:05:05.829731 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:06.32971484 +0000 UTC m=+167.347440662 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:05 crc kubenswrapper[4765]: I0121 13:05:05.931153 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:05 crc kubenswrapper[4765]: E0121 13:05:05.931582 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:06.431569037 +0000 UTC m=+167.449294859 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.032929 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:06 crc kubenswrapper[4765]: E0121 13:05:06.033434 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:06.533411365 +0000 UTC m=+167.551137177 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.091799 4765 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9w7tp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.091876 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp" podUID="7abbef71-8ead-4d5e-afc9-45a1195804cd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.099863 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" event={"ID":"7c5b52bd-6cb5-4544-9c7d-b374210ae44d","Type":"ContainerStarted","Data":"b82663b72ca93f8cec0aecc520e146ec55c3c5bc473669f52e9a7b56a4f33db8"}
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.106692 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c"
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.134491 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:06 crc kubenswrapper[4765]: E0121 13:05:06.134886 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:06.634870401 +0000 UTC m=+167.652596223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.157732 4765 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-kn5fp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.157825 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp" podUID="b0a0a1c1-7631-4b40-8a54-268af3d95cb6" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.160201 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k"
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.236437 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.237412 4765 patch_prober.go:28] interesting pod/router-default-5444994796-mww4w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 13:05:06 crc kubenswrapper[4765]: [-]has-synced failed: reason withheld
Jan 21 13:05:06 crc kubenswrapper[4765]: [+]process-running ok
Jan 21 13:05:06 crc kubenswrapper[4765]: healthz check failed
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.237471 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mww4w" podUID="08434441-0009-483c-84b1-86d78ac699f4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 13:05:06 crc kubenswrapper[4765]: E0121 13:05:06.238480 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:06.738464187 +0000 UTC m=+167.756190009 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.339246 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:06 crc kubenswrapper[4765]: E0121 13:05:06.339763 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:06.839739168 +0000 UTC m=+167.857464990 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.441361 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:06 crc kubenswrapper[4765]: E0121 13:05:06.441842 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:06.941822202 +0000 UTC m=+167.959548024 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.543623 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:06 crc kubenswrapper[4765]: E0121 13:05:06.544019 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:07.044004079 +0000 UTC m=+168.061729901 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.645173 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:06 crc kubenswrapper[4765]: E0121 13:05:06.645430 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:07.145396954 +0000 UTC m=+168.163122776 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.645673 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:06 crc kubenswrapper[4765]: E0121 13:05:06.646082 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:07.146072662 +0000 UTC m=+168.163798484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.747421 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:06 crc kubenswrapper[4765]: E0121 13:05:06.747689 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:07.247655893 +0000 UTC m=+168.265381715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.747768 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:06 crc kubenswrapper[4765]: E0121 13:05:06.748177 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:07.248167057 +0000 UTC m=+168.265892879 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.849375 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:06 crc kubenswrapper[4765]: E0121 13:05:06.849661 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:07.349617252 +0000 UTC m=+168.367343074 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.850090 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:06 crc kubenswrapper[4765]: E0121 13:05:06.850525 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:07.350506957 +0000 UTC m=+168.368232979 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.951749 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:06 crc kubenswrapper[4765]: E0121 13:05:06.952329 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:07.452304153 +0000 UTC m=+168.470029975 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:06 crc kubenswrapper[4765]: I0121 13:05:06.979855 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9w7tp"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.005291 4765 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.054308 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:07 crc kubenswrapper[4765]: E0121 13:05:07.054913 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:07.55488129 +0000 UTC m=+168.572607292 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.095697 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-l7658"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.095774 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-l7658"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.097573 4765 patch_prober.go:28] interesting pod/console-f9d7485db-l7658 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.097639 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-l7658" podUID="0be5f3b8-eeae-405b-a836-e806531a57e0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.109782 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" event={"ID":"7c5b52bd-6cb5-4544-9c7d-b374210ae44d","Type":"ContainerStarted","Data":"90fcca6428c9af6a85b323d673465342f8028e50612c888d797b4f239c07e478"}
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.123427 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8pg48"]
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.125051 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8pg48"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.127554 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-csdrp"]
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.128914 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-csdrp"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.132005 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8pg48"]
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.135905 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.139572 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gcs5c"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.145119 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-csdrp"]
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.149418 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bvt47"]
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.151493 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bvt47"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.152391 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bvt47"]
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.157708 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 13:05:07 crc kubenswrapper[4765]: E0121 13:05:07.158766 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:07.658730353 +0000 UTC m=+168.676456175 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.202053 4765 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cllr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.202116 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5cllr" podUID="193f8517-94f7-42fe-9fe2-0bdb69cc8424" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.204996 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5cllr"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.206118 4765 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cllr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.206226 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cllr" podUID="193f8517-94f7-42fe-9fe2-0bdb69cc8424" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.206795 4765 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cllr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.208751 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cllr" podUID="193f8517-94f7-42fe-9fe2-0bdb69cc8424" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.228178 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.239527 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-mww4w"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.243938 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p4dt7"]
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.245826 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p4dt7"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.247546 4765 patch_prober.go:28] interesting pod/router-default-5444994796-mww4w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 13:05:07 crc kubenswrapper[4765]: [-]has-synced failed: reason withheld
Jan 21 13:05:07 crc kubenswrapper[4765]: [+]process-running ok
Jan 21 13:05:07 crc kubenswrapper[4765]: healthz check failed
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.247667 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mww4w" podUID="08434441-0009-483c-84b1-86d78ac699f4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.262089 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.262163 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx5bm\" (UniqueName: \"kubernetes.io/projected/7db200a0-358b-415c-960e-cec8935a0435-kube-api-access-bx5bm\") pod \"community-operators-bvt47\" (UID: \"7db200a0-358b-415c-960e-cec8935a0435\") " pod="openshift-marketplace/community-operators-bvt47"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.262302 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7db200a0-358b-415c-960e-cec8935a0435-utilities\") pod \"community-operators-bvt47\" (UID: \"7db200a0-358b-415c-960e-cec8935a0435\") " pod="openshift-marketplace/community-operators-bvt47"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.262362 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1370386e-d1d5-471c-a3cc-fcbc7649a549-utilities\") pod \"community-operators-8pg48\" (UID: \"1370386e-d1d5-471c-a3cc-fcbc7649a549\") " pod="openshift-marketplace/community-operators-8pg48"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.262384 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1370386e-d1d5-471c-a3cc-fcbc7649a549-catalog-content\") pod \"community-operators-8pg48\" (UID: \"1370386e-d1d5-471c-a3cc-fcbc7649a549\") " pod="openshift-marketplace/community-operators-8pg48"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.262411 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq5tl\" (UniqueName: \"kubernetes.io/projected/1370386e-d1d5-471c-a3cc-fcbc7649a549-kube-api-access-qq5tl\") pod \"community-operators-8pg48\" (UID: \"1370386e-d1d5-471c-a3cc-fcbc7649a549\") " pod="openshift-marketplace/community-operators-8pg48"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.262444 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2frp\" (UniqueName: \"kubernetes.io/projected/8f46c9a8-ee1d-497c-92f3-d7f43ebddc85-kube-api-access-k2frp\") pod \"certified-operators-csdrp\" (UID: \"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85\") " pod="openshift-marketplace/certified-operators-csdrp"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.262491 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7db200a0-358b-415c-960e-cec8935a0435-catalog-content\") pod \"community-operators-bvt47\" (UID: \"7db200a0-358b-415c-960e-cec8935a0435\") " pod="openshift-marketplace/community-operators-bvt47"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.262525 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f46c9a8-ee1d-497c-92f3-d7f43ebddc85-utilities\") pod \"certified-operators-csdrp\" (UID: \"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85\") " pod="openshift-marketplace/certified-operators-csdrp"
Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.262547 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f46c9a8-ee1d-497c-92f3-d7f43ebddc85-catalog-content\") pod \"certified-operators-csdrp\" (UID: \"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85\") " pod="openshift-marketplace/certified-operators-csdrp"
Jan 21 13:05:07 crc kubenswrapper[4765]: E0121 13:05:07.263884 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:07.763870691 +0000 UTC m=+168.781596503 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.287683 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p4dt7"] Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.295812 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.363751 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.363799 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.364157 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c876b68-6eab-460d-983d-51514e30fbd1-utilities\") pod \"certified-operators-p4dt7\" (UID: \"0c876b68-6eab-460d-983d-51514e30fbd1\") " pod="openshift-marketplace/certified-operators-p4dt7" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.364280 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7db200a0-358b-415c-960e-cec8935a0435-utilities\") pod \"community-operators-bvt47\" (UID: \"7db200a0-358b-415c-960e-cec8935a0435\") " pod="openshift-marketplace/community-operators-bvt47" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.364325 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1370386e-d1d5-471c-a3cc-fcbc7649a549-utilities\") pod \"community-operators-8pg48\" (UID: \"1370386e-d1d5-471c-a3cc-fcbc7649a549\") " pod="openshift-marketplace/community-operators-8pg48" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.364349 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1370386e-d1d5-471c-a3cc-fcbc7649a549-catalog-content\") pod \"community-operators-8pg48\" (UID: \"1370386e-d1d5-471c-a3cc-fcbc7649a549\") " pod="openshift-marketplace/community-operators-8pg48" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.364371 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq5tl\" (UniqueName: \"kubernetes.io/projected/1370386e-d1d5-471c-a3cc-fcbc7649a549-kube-api-access-qq5tl\") pod \"community-operators-8pg48\" (UID: \"1370386e-d1d5-471c-a3cc-fcbc7649a549\") " pod="openshift-marketplace/community-operators-8pg48" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.364402 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2frp\" 
(UniqueName: \"kubernetes.io/projected/8f46c9a8-ee1d-497c-92f3-d7f43ebddc85-kube-api-access-k2frp\") pod \"certified-operators-csdrp\" (UID: \"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85\") " pod="openshift-marketplace/certified-operators-csdrp" Jan 21 13:05:07 crc kubenswrapper[4765]: E0121 13:05:07.364500 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:07.864457264 +0000 UTC m=+168.882183156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.364710 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7db200a0-358b-415c-960e-cec8935a0435-catalog-content\") pod \"community-operators-bvt47\" (UID: \"7db200a0-358b-415c-960e-cec8935a0435\") " pod="openshift-marketplace/community-operators-bvt47" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.364773 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.364815 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f46c9a8-ee1d-497c-92f3-d7f43ebddc85-utilities\") pod \"certified-operators-csdrp\" (UID: \"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85\") " pod="openshift-marketplace/certified-operators-csdrp" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.364852 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c876b68-6eab-460d-983d-51514e30fbd1-catalog-content\") pod \"certified-operators-p4dt7\" (UID: \"0c876b68-6eab-460d-983d-51514e30fbd1\") " pod="openshift-marketplace/certified-operators-p4dt7" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.364897 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f46c9a8-ee1d-497c-92f3-d7f43ebddc85-catalog-content\") pod \"certified-operators-csdrp\" (UID: \"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85\") " pod="openshift-marketplace/certified-operators-csdrp" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.365032 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.365099 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx5bm\" (UniqueName: 
\"kubernetes.io/projected/7db200a0-358b-415c-960e-cec8935a0435-kube-api-access-bx5bm\") pod \"community-operators-bvt47\" (UID: \"7db200a0-358b-415c-960e-cec8935a0435\") " pod="openshift-marketplace/community-operators-bvt47" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.365189 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x59rb\" (UniqueName: \"kubernetes.io/projected/0c876b68-6eab-460d-983d-51514e30fbd1-kube-api-access-x59rb\") pod \"certified-operators-p4dt7\" (UID: \"0c876b68-6eab-460d-983d-51514e30fbd1\") " pod="openshift-marketplace/certified-operators-p4dt7" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.365947 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1370386e-d1d5-471c-a3cc-fcbc7649a549-utilities\") pod \"community-operators-8pg48\" (UID: \"1370386e-d1d5-471c-a3cc-fcbc7649a549\") " pod="openshift-marketplace/community-operators-8pg48" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.366043 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1370386e-d1d5-471c-a3cc-fcbc7649a549-catalog-content\") pod \"community-operators-8pg48\" (UID: \"1370386e-d1d5-471c-a3cc-fcbc7649a549\") " pod="openshift-marketplace/community-operators-8pg48" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.366073 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7db200a0-358b-415c-960e-cec8935a0435-utilities\") pod \"community-operators-bvt47\" (UID: \"7db200a0-358b-415c-960e-cec8935a0435\") " pod="openshift-marketplace/community-operators-bvt47" Jan 21 13:05:07 crc kubenswrapper[4765]: E0121 13:05:07.366491 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:07.866473629 +0000 UTC m=+168.884199451 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.366883 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f46c9a8-ee1d-497c-92f3-d7f43ebddc85-utilities\") pod \"certified-operators-csdrp\" (UID: \"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85\") " pod="openshift-marketplace/certified-operators-csdrp" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.366921 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7db200a0-358b-415c-960e-cec8935a0435-catalog-content\") pod \"community-operators-bvt47\" (UID: \"7db200a0-358b-415c-960e-cec8935a0435\") " pod="openshift-marketplace/community-operators-bvt47" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.367002 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f46c9a8-ee1d-497c-92f3-d7f43ebddc85-catalog-content\") pod \"certified-operators-csdrp\" (UID: \"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85\") " pod="openshift-marketplace/certified-operators-csdrp" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.381942 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.382287 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.468285 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.468671 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c876b68-6eab-460d-983d-51514e30fbd1-catalog-content\") pod \"certified-operators-p4dt7\" (UID: \"0c876b68-6eab-460d-983d-51514e30fbd1\") " pod="openshift-marketplace/certified-operators-p4dt7" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.468724 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e8024fb-cd96-4996-a1a0-79a3874ccac9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4e8024fb-cd96-4996-a1a0-79a3874ccac9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.468754 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e8024fb-cd96-4996-a1a0-79a3874ccac9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"4e8024fb-cd96-4996-a1a0-79a3874ccac9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.468819 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x59rb\" (UniqueName: \"kubernetes.io/projected/0c876b68-6eab-460d-983d-51514e30fbd1-kube-api-access-x59rb\") pod \"certified-operators-p4dt7\" (UID: \"0c876b68-6eab-460d-983d-51514e30fbd1\") " pod="openshift-marketplace/certified-operators-p4dt7" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.468854 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c876b68-6eab-460d-983d-51514e30fbd1-utilities\") pod \"certified-operators-p4dt7\" (UID: \"0c876b68-6eab-460d-983d-51514e30fbd1\") " pod="openshift-marketplace/certified-operators-p4dt7" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.469380 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c876b68-6eab-460d-983d-51514e30fbd1-utilities\") pod \"certified-operators-p4dt7\" (UID: \"0c876b68-6eab-460d-983d-51514e30fbd1\") " pod="openshift-marketplace/certified-operators-p4dt7" Jan 21 13:05:07 crc kubenswrapper[4765]: E0121 13:05:07.469435 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:07.969406737 +0000 UTC m=+168.987132559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.469864 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c876b68-6eab-460d-983d-51514e30fbd1-catalog-content\") pod \"certified-operators-p4dt7\" (UID: \"0c876b68-6eab-460d-983d-51514e30fbd1\") " pod="openshift-marketplace/certified-operators-p4dt7" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.474589 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq5tl\" (UniqueName: \"kubernetes.io/projected/1370386e-d1d5-471c-a3cc-fcbc7649a549-kube-api-access-qq5tl\") pod \"community-operators-8pg48\" (UID: \"1370386e-d1d5-471c-a3cc-fcbc7649a549\") " pod="openshift-marketplace/community-operators-8pg48" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.484136 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx5bm\" (UniqueName: \"kubernetes.io/projected/7db200a0-358b-415c-960e-cec8935a0435-kube-api-access-bx5bm\") pod \"community-operators-bvt47\" (UID: \"7db200a0-358b-415c-960e-cec8935a0435\") " pod="openshift-marketplace/community-operators-bvt47" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.491243 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 
13:05:07.500226 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2frp\" (UniqueName: \"kubernetes.io/projected/8f46c9a8-ee1d-497c-92f3-d7f43ebddc85-kube-api-access-k2frp\") pod \"certified-operators-csdrp\" (UID: \"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85\") " pod="openshift-marketplace/certified-operators-csdrp" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.520141 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bvt47" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.570010 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e8024fb-cd96-4996-a1a0-79a3874ccac9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4e8024fb-cd96-4996-a1a0-79a3874ccac9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.570048 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e8024fb-cd96-4996-a1a0-79a3874ccac9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4e8024fb-cd96-4996-a1a0-79a3874ccac9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.570077 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.570496 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e8024fb-cd96-4996-a1a0-79a3874ccac9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4e8024fb-cd96-4996-a1a0-79a3874ccac9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 13:05:07 crc kubenswrapper[4765]: E0121 13:05:07.570455 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:08.070442742 +0000 UTC m=+169.088168564 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.606909 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x59rb\" (UniqueName: \"kubernetes.io/projected/0c876b68-6eab-460d-983d-51514e30fbd1-kube-api-access-x59rb\") pod \"certified-operators-p4dt7\" (UID: \"0c876b68-6eab-460d-983d-51514e30fbd1\") " pod="openshift-marketplace/certified-operators-p4dt7" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.671061 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:05:07 crc kubenswrapper[4765]: E0121 13:05:07.672487 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:08.172421503 +0000 UTC m=+169.190147335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.704941 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e8024fb-cd96-4996-a1a0-79a3874ccac9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4e8024fb-cd96-4996-a1a0-79a3874ccac9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.705480 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.750655 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8pg48" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.763941 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-csdrp" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.788172 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:07 crc kubenswrapper[4765]: E0121 13:05:07.789393 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 13:05:08.289373366 +0000 UTC m=+169.307099188 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2x4pn" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.879523 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p4dt7" Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.892053 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:05:07 crc kubenswrapper[4765]: E0121 13:05:07.892710 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 13:05:08.392688454 +0000 UTC m=+169.410414266 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.905714 4765 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T13:05:07.005323974Z","Handler":null,"Name":""} Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.988263 4765 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.988319 4765 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 21 13:05:07 crc kubenswrapper[4765]: I0121 13:05:07.994752 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.075199 4765 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.075265 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.081422 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" podStartSLOduration=14.081400126 podStartE2EDuration="14.081400126s" podCreationTimestamp="2026-01-21 13:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:08.071697529 +0000 UTC m=+169.089423351" watchObservedRunningTime="2026-01-21 13:05:08.081400126 +0000 UTC m=+169.099125948" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.153335 4765 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-kn5fp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.153414 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp" podUID="b0a0a1c1-7631-4b40-8a54-268af3d95cb6" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.155700 4765 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-kn5fp container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.155795 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp" podUID="b0a0a1c1-7631-4b40-8a54-268af3d95cb6" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.243614 4765 patch_prober.go:28] interesting pod/router-default-5444994796-mww4w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 13:05:08 crc kubenswrapper[4765]: [-]has-synced failed: reason withheld Jan 21 13:05:08 crc kubenswrapper[4765]: [+]process-running ok Jan 21 13:05:08 crc kubenswrapper[4765]: healthz check failed Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.243701 4765 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mww4w" podUID="08434441-0009-483c-84b1-86d78ac699f4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.407707 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2x4pn\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.502831 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zwg7s"] Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.520642 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.542458 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwg7s" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.552139 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.578403 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.579546 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwg7s"] Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.623843 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd12a18-d34b-4d96-9409-f26a13dc93f5-utilities\") pod \"redhat-marketplace-zwg7s\" (UID: \"4bd12a18-d34b-4d96-9409-f26a13dc93f5\") " pod="openshift-marketplace/redhat-marketplace-zwg7s" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.623954 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd12a18-d34b-4d96-9409-f26a13dc93f5-catalog-content\") pod \"redhat-marketplace-zwg7s\" (UID: \"4bd12a18-d34b-4d96-9409-f26a13dc93f5\") " pod="openshift-marketplace/redhat-marketplace-zwg7s" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.623989 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbgk4\" (UniqueName: \"kubernetes.io/projected/4bd12a18-d34b-4d96-9409-f26a13dc93f5-kube-api-access-qbgk4\") pod \"redhat-marketplace-zwg7s\" (UID: \"4bd12a18-d34b-4d96-9409-f26a13dc93f5\") " pod="openshift-marketplace/redhat-marketplace-zwg7s" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.701760 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") 
pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.727060 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd12a18-d34b-4d96-9409-f26a13dc93f5-utilities\") pod \"redhat-marketplace-zwg7s\" (UID: \"4bd12a18-d34b-4d96-9409-f26a13dc93f5\") " pod="openshift-marketplace/redhat-marketplace-zwg7s" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.727166 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd12a18-d34b-4d96-9409-f26a13dc93f5-catalog-content\") pod \"redhat-marketplace-zwg7s\" (UID: \"4bd12a18-d34b-4d96-9409-f26a13dc93f5\") " pod="openshift-marketplace/redhat-marketplace-zwg7s" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.727225 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbgk4\" (UniqueName: \"kubernetes.io/projected/4bd12a18-d34b-4d96-9409-f26a13dc93f5-kube-api-access-qbgk4\") pod \"redhat-marketplace-zwg7s\" (UID: \"4bd12a18-d34b-4d96-9409-f26a13dc93f5\") " pod="openshift-marketplace/redhat-marketplace-zwg7s" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.728120 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd12a18-d34b-4d96-9409-f26a13dc93f5-utilities\") pod \"redhat-marketplace-zwg7s\" (UID: \"4bd12a18-d34b-4d96-9409-f26a13dc93f5\") " pod="openshift-marketplace/redhat-marketplace-zwg7s" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.728427 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd12a18-d34b-4d96-9409-f26a13dc93f5-catalog-content\") pod \"redhat-marketplace-zwg7s\" (UID: \"4bd12a18-d34b-4d96-9409-f26a13dc93f5\") " pod="openshift-marketplace/redhat-marketplace-zwg7s" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.772426 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bvt47"] Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.868197 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbgk4\" (UniqueName: \"kubernetes.io/projected/4bd12a18-d34b-4d96-9409-f26a13dc93f5-kube-api-access-qbgk4\") pod \"redhat-marketplace-zwg7s\" (UID: \"4bd12a18-d34b-4d96-9409-f26a13dc93f5\") " pod="openshift-marketplace/redhat-marketplace-zwg7s" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.937185 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwg7s" Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.970546 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rrqmv"] Jan 21 13:05:08 crc kubenswrapper[4765]: I0121 13:05:08.973124 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrqmv" Jan 21 13:05:09 crc kubenswrapper[4765]: I0121 13:05:09.037037 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bde1e264-573c-4186-8b9b-a0cb024d5d91-utilities\") pod \"redhat-marketplace-rrqmv\" (UID: \"bde1e264-573c-4186-8b9b-a0cb024d5d91\") " pod="openshift-marketplace/redhat-marketplace-rrqmv" Jan 21 13:05:09 crc kubenswrapper[4765]: I0121 13:05:09.037093 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bde1e264-573c-4186-8b9b-a0cb024d5d91-catalog-content\") pod \"redhat-marketplace-rrqmv\" (UID: \"bde1e264-573c-4186-8b9b-a0cb024d5d91\") " pod="openshift-marketplace/redhat-marketplace-rrqmv" Jan 21 13:05:09 crc kubenswrapper[4765]: I0121 13:05:09.037145 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7whw\" (UniqueName: \"kubernetes.io/projected/bde1e264-573c-4186-8b9b-a0cb024d5d91-kube-api-access-l7whw\") pod \"redhat-marketplace-rrqmv\" (UID: \"bde1e264-573c-4186-8b9b-a0cb024d5d91\") " pod="openshift-marketplace/redhat-marketplace-rrqmv" Jan 21 13:05:09 crc kubenswrapper[4765]: I0121 13:05:09.139982 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bde1e264-573c-4186-8b9b-a0cb024d5d91-utilities\") pod \"redhat-marketplace-rrqmv\" (UID: \"bde1e264-573c-4186-8b9b-a0cb024d5d91\") " pod="openshift-marketplace/redhat-marketplace-rrqmv" Jan 21 13:05:09 crc kubenswrapper[4765]: I0121 13:05:09.140053 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bde1e264-573c-4186-8b9b-a0cb024d5d91-catalog-content\") pod \"redhat-marketplace-rrqmv\" (UID: \"bde1e264-573c-4186-8b9b-a0cb024d5d91\") " pod="openshift-marketplace/redhat-marketplace-rrqmv" Jan 21 13:05:09 crc kubenswrapper[4765]: I0121 13:05:09.140100 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7whw\" (UniqueName: \"kubernetes.io/projected/bde1e264-573c-4186-8b9b-a0cb024d5d91-kube-api-access-l7whw\") pod \"redhat-marketplace-rrqmv\" (UID: \"bde1e264-573c-4186-8b9b-a0cb024d5d91\") " pod="openshift-marketplace/redhat-marketplace-rrqmv" Jan 21 13:05:09 crc kubenswrapper[4765]: I0121 13:05:09.140913 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bde1e264-573c-4186-8b9b-a0cb024d5d91-utilities\") pod \"redhat-marketplace-rrqmv\" (UID: \"bde1e264-573c-4186-8b9b-a0cb024d5d91\") " pod="openshift-marketplace/redhat-marketplace-rrqmv" Jan 21 13:05:09 crc kubenswrapper[4765]: I0121 13:05:09.141158 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bde1e264-573c-4186-8b9b-a0cb024d5d91-catalog-content\") pod \"redhat-marketplace-rrqmv\" (UID: \"bde1e264-573c-4186-8b9b-a0cb024d5d91\") " pod="openshift-marketplace/redhat-marketplace-rrqmv" Jan 21 13:05:09 crc kubenswrapper[4765]: I0121 13:05:09.172995 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvt47" 
event={"ID":"7db200a0-358b-415c-960e-cec8935a0435","Type":"ContainerStarted","Data":"92e2d3dcaed6f29693f4f2cf15b2bb934ff39989e97f8cf4e49e0a2658bb018e"} Jan 21 13:05:09 crc kubenswrapper[4765]: I0121 13:05:09.210690 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7whw\" (UniqueName: \"kubernetes.io/projected/bde1e264-573c-4186-8b9b-a0cb024d5d91-kube-api-access-l7whw\") pod \"redhat-marketplace-rrqmv\" (UID: \"bde1e264-573c-4186-8b9b-a0cb024d5d91\") " pod="openshift-marketplace/redhat-marketplace-rrqmv" Jan 21 13:05:09 crc kubenswrapper[4765]: I0121 13:05:09.215128 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrqmv"] Jan 21 13:05:09 crc kubenswrapper[4765]: I0121 13:05:09.253507 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:05:09 crc kubenswrapper[4765]: I0121 13:05:09.259787 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-mww4w" Jan 21 13:05:09 crc kubenswrapper[4765]: I0121 13:05:09.354468 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrqmv" Jan 21 13:05:09 crc kubenswrapper[4765]: I0121 13:05:09.540292 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p4dt7"] Jan 21 13:05:09 crc kubenswrapper[4765]: W0121 13:05:09.556482 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c876b68_6eab_460d_983d_51514e30fbd1.slice/crio-f2b02cd94f625346cfe28438919b588d804745fca03476599d1a8d794ab45820 WatchSource:0}: Error finding container f2b02cd94f625346cfe28438919b588d804745fca03476599d1a8d794ab45820: Status 404 returned error can't find the container with id f2b02cd94f625346cfe28438919b588d804745fca03476599d1a8d794ab45820 Jan 21 13:05:09 crc kubenswrapper[4765]: I0121 13:05:09.628901 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 21 13:05:09 crc kubenswrapper[4765]: I0121 13:05:09.640296 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 13:05:09 crc kubenswrapper[4765]: I0121 13:05:09.978855 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8pg48"] Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.144016 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x7f8m"] Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.168722 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x7f8m" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.173105 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.190497 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-csdrp"] Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.195929 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kn5fp" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.225180 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4e8024fb-cd96-4996-a1a0-79a3874ccac9","Type":"ContainerStarted","Data":"3b802022cf4780f35d9d8c35fa3c8fe0043f7a62557c924bc240adb3a1e55615"} Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.231525 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7f8m"] Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.266671 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4dt7" event={"ID":"0c876b68-6eab-460d-983d-51514e30fbd1","Type":"ContainerStarted","Data":"f2b02cd94f625346cfe28438919b588d804745fca03476599d1a8d794ab45820"} Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.297521 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pg48" event={"ID":"1370386e-d1d5-471c-a3cc-fcbc7649a549","Type":"ContainerStarted","Data":"994f13f636fbfe15eca9da132661e221e0a668b923b753219a43f613d5a39c0a"} Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.337026 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080522e6-050a-4df7-afe5-2476e455e157-utilities\") pod \"redhat-operators-x7f8m\" (UID: \"080522e6-050a-4df7-afe5-2476e455e157\") " pod="openshift-marketplace/redhat-operators-x7f8m" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.337141 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080522e6-050a-4df7-afe5-2476e455e157-catalog-content\") pod \"redhat-operators-x7f8m\" (UID: \"080522e6-050a-4df7-afe5-2476e455e157\") " pod="openshift-marketplace/redhat-operators-x7f8m" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.337184 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk22t\" (UniqueName: \"kubernetes.io/projected/080522e6-050a-4df7-afe5-2476e455e157-kube-api-access-kk22t\") pod \"redhat-operators-x7f8m\" (UID: \"080522e6-050a-4df7-afe5-2476e455e157\") " pod="openshift-marketplace/redhat-operators-x7f8m" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.370827 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2x4pn"] Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.439816 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080522e6-050a-4df7-afe5-2476e455e157-utilities\") pod \"redhat-operators-x7f8m\" (UID: \"080522e6-050a-4df7-afe5-2476e455e157\") " 
pod="openshift-marketplace/redhat-operators-x7f8m" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.439885 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080522e6-050a-4df7-afe5-2476e455e157-catalog-content\") pod \"redhat-operators-x7f8m\" (UID: \"080522e6-050a-4df7-afe5-2476e455e157\") " pod="openshift-marketplace/redhat-operators-x7f8m" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.439918 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk22t\" (UniqueName: \"kubernetes.io/projected/080522e6-050a-4df7-afe5-2476e455e157-kube-api-access-kk22t\") pod \"redhat-operators-x7f8m\" (UID: \"080522e6-050a-4df7-afe5-2476e455e157\") " pod="openshift-marketplace/redhat-operators-x7f8m" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.440860 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080522e6-050a-4df7-afe5-2476e455e157-catalog-content\") pod \"redhat-operators-x7f8m\" (UID: \"080522e6-050a-4df7-afe5-2476e455e157\") " pod="openshift-marketplace/redhat-operators-x7f8m" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.441113 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080522e6-050a-4df7-afe5-2476e455e157-utilities\") pod \"redhat-operators-x7f8m\" (UID: \"080522e6-050a-4df7-afe5-2476e455e157\") " pod="openshift-marketplace/redhat-operators-x7f8m" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.550983 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk22t\" (UniqueName: \"kubernetes.io/projected/080522e6-050a-4df7-afe5-2476e455e157-kube-api-access-kk22t\") pod \"redhat-operators-x7f8m\" (UID: \"080522e6-050a-4df7-afe5-2476e455e157\") " pod="openshift-marketplace/redhat-operators-x7f8m" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.555554 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7f8m" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.556128 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wk44t"] Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.566111 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wk44t" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.603515 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wk44t"] Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.669449 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e452dd-2411-4ffb-8b6a-fed70777e6fc-utilities\") pod \"redhat-operators-wk44t\" (UID: \"44e452dd-2411-4ffb-8b6a-fed70777e6fc\") " pod="openshift-marketplace/redhat-operators-wk44t" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.669559 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e452dd-2411-4ffb-8b6a-fed70777e6fc-catalog-content\") pod \"redhat-operators-wk44t\" (UID: \"44e452dd-2411-4ffb-8b6a-fed70777e6fc\") " pod="openshift-marketplace/redhat-operators-wk44t" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.669734 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnfh2\" (UniqueName: \"kubernetes.io/projected/44e452dd-2411-4ffb-8b6a-fed70777e6fc-kube-api-access-xnfh2\") pod \"redhat-operators-wk44t\" (UID: \"44e452dd-2411-4ffb-8b6a-fed70777e6fc\") " pod="openshift-marketplace/redhat-operators-wk44t" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.672912 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwg7s"] Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.775427 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e452dd-2411-4ffb-8b6a-fed70777e6fc-utilities\") pod \"redhat-operators-wk44t\" (UID: \"44e452dd-2411-4ffb-8b6a-fed70777e6fc\") " pod="openshift-marketplace/redhat-operators-wk44t" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.775483 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e452dd-2411-4ffb-8b6a-fed70777e6fc-catalog-content\") pod \"redhat-operators-wk44t\" (UID: \"44e452dd-2411-4ffb-8b6a-fed70777e6fc\") " pod="openshift-marketplace/redhat-operators-wk44t" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.775598 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnfh2\" (UniqueName: \"kubernetes.io/projected/44e452dd-2411-4ffb-8b6a-fed70777e6fc-kube-api-access-xnfh2\") pod \"redhat-operators-wk44t\" (UID: \"44e452dd-2411-4ffb-8b6a-fed70777e6fc\") " pod="openshift-marketplace/redhat-operators-wk44t" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.782063 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e452dd-2411-4ffb-8b6a-fed70777e6fc-utilities\") pod \"redhat-operators-wk44t\" (UID: \"44e452dd-2411-4ffb-8b6a-fed70777e6fc\") " pod="openshift-marketplace/redhat-operators-wk44t" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.786582 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e452dd-2411-4ffb-8b6a-fed70777e6fc-catalog-content\") pod \"redhat-operators-wk44t\" (UID: \"44e452dd-2411-4ffb-8b6a-fed70777e6fc\") " 
pod="openshift-marketplace/redhat-operators-wk44t" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.850073 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnfh2\" (UniqueName: \"kubernetes.io/projected/44e452dd-2411-4ffb-8b6a-fed70777e6fc-kube-api-access-xnfh2\") pod \"redhat-operators-wk44t\" (UID: \"44e452dd-2411-4ffb-8b6a-fed70777e6fc\") " pod="openshift-marketplace/redhat-operators-wk44t" Jan 21 13:05:10 crc kubenswrapper[4765]: I0121 13:05:10.911055 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrqmv"] Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.016752 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wk44t" Jan 21 13:05:11 crc kubenswrapper[4765]: E0121 13:05:11.070348 4765 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f46c9a8_ee1d_497c_92f3_d7f43ebddc85.slice/crio-36cdad04543e88b5b41f0d3bbef7aca4b092dfdc798be4da5b76436d04f6e0bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1370386e_d1d5_471c_a3cc_fcbc7649a549.slice/crio-1928ca2714dccac5c13007535548ff3f9087ff650b7ce84211d5bb9d793fad49.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1370386e_d1d5_471c_a3cc_fcbc7649a549.slice/crio-conmon-1928ca2714dccac5c13007535548ff3f9087ff650b7ce84211d5bb9d793fad49.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f46c9a8_ee1d_497c_92f3_d7f43ebddc85.slice/crio-conmon-36cdad04543e88b5b41f0d3bbef7aca4b092dfdc798be4da5b76436d04f6e0bb.scope\": RecentStats: unable to find data in memory cache]" Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.356102 4765 generic.go:334] "Generic (PLEG): container finished" podID="0c876b68-6eab-460d-983d-51514e30fbd1" containerID="5c4ae586aaf1ed88205caadbf4c1d649720102ae00810664a8643722efe0e550" exitCode=0 Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.356246 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4dt7" event={"ID":"0c876b68-6eab-460d-983d-51514e30fbd1","Type":"ContainerDied","Data":"5c4ae586aaf1ed88205caadbf4c1d649720102ae00810664a8643722efe0e550"} Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.366803 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.378384 4765 generic.go:334] "Generic (PLEG): container finished" podID="1370386e-d1d5-471c-a3cc-fcbc7649a549" containerID="1928ca2714dccac5c13007535548ff3f9087ff650b7ce84211d5bb9d793fad49" exitCode=0 Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.378475 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pg48" event={"ID":"1370386e-d1d5-471c-a3cc-fcbc7649a549","Type":"ContainerDied","Data":"1928ca2714dccac5c13007535548ff3f9087ff650b7ce84211d5bb9d793fad49"} Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.384727 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwg7s" 
event={"ID":"4bd12a18-d34b-4d96-9409-f26a13dc93f5","Type":"ContainerStarted","Data":"f58f06fecc04e2ea439b916e568dce5b2a60b8a56e98162348d99c791341f835"} Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.391164 4765 generic.go:334] "Generic (PLEG): container finished" podID="8f46c9a8-ee1d-497c-92f3-d7f43ebddc85" containerID="36cdad04543e88b5b41f0d3bbef7aca4b092dfdc798be4da5b76436d04f6e0bb" exitCode=0 Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.391281 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-csdrp" event={"ID":"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85","Type":"ContainerDied","Data":"36cdad04543e88b5b41f0d3bbef7aca4b092dfdc798be4da5b76436d04f6e0bb"} Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.391315 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-csdrp" event={"ID":"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85","Type":"ContainerStarted","Data":"718233af53a0851c3b705ead2857d2d0d2267474d9660336e2a3ffe67275c20a"} Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.407850 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrqmv" event={"ID":"bde1e264-573c-4186-8b9b-a0cb024d5d91","Type":"ContainerStarted","Data":"b3df9c75120af2ae1fa720405e149952d6a5850229823bbe20bfc4e8db71067d"} Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.438442 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" event={"ID":"5d4723c5-1628-4481-83b8-498fd4e5362e","Type":"ContainerStarted","Data":"46e1eca122573a704800301b2bbd932930edc1f239cdab9146563d896a2f94d4"} Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.438492 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" event={"ID":"5d4723c5-1628-4481-83b8-498fd4e5362e","Type":"ContainerStarted","Data":"956cb331c5054b32750caf488e622a8ea1c21600d9675dba8a7acaf574c434ca"} Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.439290 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.488031 4765 generic.go:334] "Generic (PLEG): container finished" podID="7db200a0-358b-415c-960e-cec8935a0435" containerID="5ca86b1827e1bc08a3f8fd97c282aaedff58f1ebf4c57c250ba6a3b2533d6f80" exitCode=0 Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.488445 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvt47" event={"ID":"7db200a0-358b-415c-960e-cec8935a0435","Type":"ContainerDied","Data":"5ca86b1827e1bc08a3f8fd97c282aaedff58f1ebf4c57c250ba6a3b2533d6f80"} Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.534574 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4e8024fb-cd96-4996-a1a0-79a3874ccac9","Type":"ContainerStarted","Data":"768858651fc922defaed02e1a388b86933289c5334a6ba292edd04c0dc8d5a04"} Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.598721 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.598704839 podStartE2EDuration="4.598704839s" podCreationTimestamp="2026-01-21 13:05:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:11.596583291 +0000 UTC m=+172.614309113" watchObservedRunningTime="2026-01-21 13:05:11.598704839 +0000 UTC m=+172.616430651" Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.638669 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" podStartSLOduration=145.63864912 podStartE2EDuration="2m25.63864912s" podCreationTimestamp="2026-01-21 13:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:11.636776169 +0000 UTC m=+172.654502001" watchObservedRunningTime="2026-01-21 13:05:11.63864912 +0000 UTC m=+172.656374942" Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.756436 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7f8m"] Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.986884 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.988150 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.997816 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 13:05:11 crc kubenswrapper[4765]: I0121 13:05:11.997938 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.032299 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.122836 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf1b067d-7c45-4d84-ab62-ef3d06385352-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bf1b067d-7c45-4d84-ab62-ef3d06385352\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.122948 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf1b067d-7c45-4d84-ab62-ef3d06385352-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bf1b067d-7c45-4d84-ab62-ef3d06385352\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.224967 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf1b067d-7c45-4d84-ab62-ef3d06385352-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bf1b067d-7c45-4d84-ab62-ef3d06385352\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.225034 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf1b067d-7c45-4d84-ab62-ef3d06385352-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bf1b067d-7c45-4d84-ab62-ef3d06385352\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.225147 4765 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf1b067d-7c45-4d84-ab62-ef3d06385352-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bf1b067d-7c45-4d84-ab62-ef3d06385352\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.273048 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf1b067d-7c45-4d84-ab62-ef3d06385352-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bf1b067d-7c45-4d84-ab62-ef3d06385352\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.284705 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-xdcw7" Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.304336 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wk44t"] Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.307301 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.435080 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs\") pod \"network-metrics-daemon-4t7jw\" (UID: \"d8dea79f-de5c-4034-9742-c322b723a59c\") " pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.446621 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d8dea79f-de5c-4034-9742-c322b723a59c-metrics-certs\") pod \"network-metrics-daemon-4t7jw\" (UID: \"d8dea79f-de5c-4034-9742-c322b723a59c\") " pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.610457 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7f8m" event={"ID":"080522e6-050a-4df7-afe5-2476e455e157","Type":"ContainerStarted","Data":"54b404a2c40714d1c65b92f721cfaf12fb632037ebd9b289fdcc3461b6544781"} Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.611032 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7f8m" event={"ID":"080522e6-050a-4df7-afe5-2476e455e157","Type":"ContainerStarted","Data":"9846c5992be783b7237249c484001d2a9c0df0b331d146c156b6728c8fec034e"} Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.642559 4765 generic.go:334] "Generic (PLEG): container finished" podID="bde1e264-573c-4186-8b9b-a0cb024d5d91" containerID="21bed4f1353abc190dd8f3f20f5667a8203657de345cb8c5cfb77ba69a812f88" exitCode=0 Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.642961 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrqmv" event={"ID":"bde1e264-573c-4186-8b9b-a0cb024d5d91","Type":"ContainerDied","Data":"21bed4f1353abc190dd8f3f20f5667a8203657de345cb8c5cfb77ba69a812f88"} Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.648742 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t7jw" Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.725033 4765 generic.go:334] "Generic (PLEG): container finished" podID="4e8024fb-cd96-4996-a1a0-79a3874ccac9" containerID="768858651fc922defaed02e1a388b86933289c5334a6ba292edd04c0dc8d5a04" exitCode=0 Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.725125 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4e8024fb-cd96-4996-a1a0-79a3874ccac9","Type":"ContainerDied","Data":"768858651fc922defaed02e1a388b86933289c5334a6ba292edd04c0dc8d5a04"} Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.757791 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wk44t" event={"ID":"44e452dd-2411-4ffb-8b6a-fed70777e6fc","Type":"ContainerStarted","Data":"cf3b8d95df9333d6c620f0d525f8a2f45d16642ce7034f5f2ff89631ae5f06fe"} Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.779126 4765 generic.go:334] "Generic (PLEG): container finished" podID="4bd12a18-d34b-4d96-9409-f26a13dc93f5" containerID="d3cdb17f10189e446a27b2cf60eb416f848be038896bbde6a7355b133aeba8a6" exitCode=0 Jan 21 13:05:12 crc kubenswrapper[4765]: I0121 13:05:12.779917 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwg7s" event={"ID":"4bd12a18-d34b-4d96-9409-f26a13dc93f5","Type":"ContainerDied","Data":"d3cdb17f10189e446a27b2cf60eb416f848be038896bbde6a7355b133aeba8a6"} Jan 21 13:05:13 crc kubenswrapper[4765]: I0121 13:05:13.032082 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 13:05:13 crc kubenswrapper[4765]: I0121 13:05:13.175834 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4t7jw"] Jan 21 13:05:13 crc kubenswrapper[4765]: W0121 13:05:13.190362 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8dea79f_de5c_4034_9742_c322b723a59c.slice/crio-eb85f2a27111e2cf6c549e279225c1d4e17a1522459353a4e2a87002abafadc7 WatchSource:0}: Error finding container eb85f2a27111e2cf6c549e279225c1d4e17a1522459353a4e2a87002abafadc7: Status 404 returned error can't find the container with id eb85f2a27111e2cf6c549e279225c1d4e17a1522459353a4e2a87002abafadc7 Jan 21 13:05:13 crc kubenswrapper[4765]: I0121 13:05:13.787886 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bf1b067d-7c45-4d84-ab62-ef3d06385352","Type":"ContainerStarted","Data":"2da4bba93e142f19a1c1caaab6bd1c539b788c2595783b2b50a11fc783ad393a"} Jan 21 13:05:13 crc kubenswrapper[4765]: I0121 13:05:13.790046 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" event={"ID":"d8dea79f-de5c-4034-9742-c322b723a59c","Type":"ContainerStarted","Data":"eb85f2a27111e2cf6c549e279225c1d4e17a1522459353a4e2a87002abafadc7"} Jan 21 13:05:13 crc kubenswrapper[4765]: I0121 13:05:13.791798 4765 generic.go:334] "Generic (PLEG): container finished" podID="080522e6-050a-4df7-afe5-2476e455e157" containerID="54b404a2c40714d1c65b92f721cfaf12fb632037ebd9b289fdcc3461b6544781" exitCode=0 Jan 21 13:05:13 crc kubenswrapper[4765]: I0121 13:05:13.791860 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7f8m" 
event={"ID":"080522e6-050a-4df7-afe5-2476e455e157","Type":"ContainerDied","Data":"54b404a2c40714d1c65b92f721cfaf12fb632037ebd9b289fdcc3461b6544781"} Jan 21 13:05:14 crc kubenswrapper[4765]: I0121 13:05:14.188061 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 13:05:14 crc kubenswrapper[4765]: I0121 13:05:14.306045 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e8024fb-cd96-4996-a1a0-79a3874ccac9-kubelet-dir\") pod \"4e8024fb-cd96-4996-a1a0-79a3874ccac9\" (UID: \"4e8024fb-cd96-4996-a1a0-79a3874ccac9\") " Jan 21 13:05:14 crc kubenswrapper[4765]: I0121 13:05:14.306153 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e8024fb-cd96-4996-a1a0-79a3874ccac9-kube-api-access\") pod \"4e8024fb-cd96-4996-a1a0-79a3874ccac9\" (UID: \"4e8024fb-cd96-4996-a1a0-79a3874ccac9\") " Jan 21 13:05:14 crc kubenswrapper[4765]: I0121 13:05:14.306322 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e8024fb-cd96-4996-a1a0-79a3874ccac9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4e8024fb-cd96-4996-a1a0-79a3874ccac9" (UID: "4e8024fb-cd96-4996-a1a0-79a3874ccac9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:05:14 crc kubenswrapper[4765]: I0121 13:05:14.306442 4765 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e8024fb-cd96-4996-a1a0-79a3874ccac9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 13:05:14 crc kubenswrapper[4765]: I0121 13:05:14.319522 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e8024fb-cd96-4996-a1a0-79a3874ccac9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4e8024fb-cd96-4996-a1a0-79a3874ccac9" (UID: "4e8024fb-cd96-4996-a1a0-79a3874ccac9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:05:14 crc kubenswrapper[4765]: I0121 13:05:14.407721 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e8024fb-cd96-4996-a1a0-79a3874ccac9-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 13:05:14 crc kubenswrapper[4765]: I0121 13:05:14.446320 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:05:14 crc kubenswrapper[4765]: I0121 13:05:14.446394 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:05:14 crc kubenswrapper[4765]: I0121 13:05:14.802937 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4e8024fb-cd96-4996-a1a0-79a3874ccac9","Type":"ContainerDied","Data":"3b802022cf4780f35d9d8c35fa3c8fe0043f7a62557c924bc240adb3a1e55615"} Jan 21 13:05:14 crc kubenswrapper[4765]: I0121 13:05:14.803572 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b802022cf4780f35d9d8c35fa3c8fe0043f7a62557c924bc240adb3a1e55615" Jan 21 13:05:14 crc kubenswrapper[4765]: I0121 13:05:14.803775 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 13:05:14 crc kubenswrapper[4765]: I0121 13:05:14.810038 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bf1b067d-7c45-4d84-ab62-ef3d06385352","Type":"ContainerStarted","Data":"e6416d8f7c580d59aa81ec3c4bc95623cde6c05d1534807caf4e7d9aaf4c5e05"} Jan 21 13:05:14 crc kubenswrapper[4765]: I0121 13:05:14.814767 4765 generic.go:334] "Generic (PLEG): container finished" podID="44e452dd-2411-4ffb-8b6a-fed70777e6fc" containerID="69602753f4342fe98d31e75cce43392073ac15e2f391ab1be38803456d196019" exitCode=0 Jan 21 13:05:14 crc kubenswrapper[4765]: I0121 13:05:14.814823 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wk44t" event={"ID":"44e452dd-2411-4ffb-8b6a-fed70777e6fc","Type":"ContainerDied","Data":"69602753f4342fe98d31e75cce43392073ac15e2f391ab1be38803456d196019"} Jan 21 13:05:14 crc kubenswrapper[4765]: I0121 13:05:14.843432 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.843407538 podStartE2EDuration="3.843407538s" podCreationTimestamp="2026-01-21 13:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:14.834811971 +0000 UTC m=+175.852537793" watchObservedRunningTime="2026-01-21 13:05:14.843407538 +0000 UTC m=+175.861133360" Jan 21 13:05:15 crc kubenswrapper[4765]: I0121 13:05:15.838791 4765 generic.go:334] "Generic (PLEG): container finished" podID="bf1b067d-7c45-4d84-ab62-ef3d06385352" containerID="e6416d8f7c580d59aa81ec3c4bc95623cde6c05d1534807caf4e7d9aaf4c5e05" 
exitCode=0 Jan 21 13:05:15 crc kubenswrapper[4765]: I0121 13:05:15.838842 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bf1b067d-7c45-4d84-ab62-ef3d06385352","Type":"ContainerDied","Data":"e6416d8f7c580d59aa81ec3c4bc95623cde6c05d1534807caf4e7d9aaf4c5e05"} Jan 21 13:05:15 crc kubenswrapper[4765]: I0121 13:05:15.846012 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" event={"ID":"d8dea79f-de5c-4034-9742-c322b723a59c","Type":"ContainerStarted","Data":"ca70be6673037ae2639f2d602af4433eb9423c5cbec5f69aea77d80d0175e17e"} Jan 21 13:05:16 crc kubenswrapper[4765]: I0121 13:05:16.864203 4765 generic.go:334] "Generic (PLEG): container finished" podID="561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b" containerID="1e1de584e78b0855b3a075eca7aab4239fac9a586c44eb22c777801b59307bc5" exitCode=0 Jan 21 13:05:16 crc kubenswrapper[4765]: I0121 13:05:16.864239 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd" event={"ID":"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b","Type":"ContainerDied","Data":"1e1de584e78b0855b3a075eca7aab4239fac9a586c44eb22c777801b59307bc5"} Jan 21 13:05:17 crc kubenswrapper[4765]: I0121 13:05:17.166459 4765 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cllr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 21 13:05:17 crc kubenswrapper[4765]: I0121 13:05:17.166571 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5cllr" podUID="193f8517-94f7-42fe-9fe2-0bdb69cc8424" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 21 13:05:17 crc kubenswrapper[4765]: I0121 13:05:17.166477 4765 patch_prober.go:28] interesting pod/downloads-7954f5f757-5cllr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 21 13:05:17 crc kubenswrapper[4765]: I0121 13:05:17.166662 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5cllr" podUID="193f8517-94f7-42fe-9fe2-0bdb69cc8424" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 21 13:05:17 crc kubenswrapper[4765]: I0121 13:05:17.216943 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:05:17 crc kubenswrapper[4765]: I0121 13:05:17.230450 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:05:17 crc kubenswrapper[4765]: I0121 13:05:17.294367 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:05:17 crc kubenswrapper[4765]: I0121 13:05:17.364506 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf1b067d-7c45-4d84-ab62-ef3d06385352-kubelet-dir\") pod \"bf1b067d-7c45-4d84-ab62-ef3d06385352\" (UID: \"bf1b067d-7c45-4d84-ab62-ef3d06385352\") " Jan 21 13:05:17 crc kubenswrapper[4765]: I0121 13:05:17.364704 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf1b067d-7c45-4d84-ab62-ef3d06385352-kube-api-access\") pod \"bf1b067d-7c45-4d84-ab62-ef3d06385352\" (UID: \"bf1b067d-7c45-4d84-ab62-ef3d06385352\") " Jan 21 13:05:17 crc kubenswrapper[4765]: I0121 13:05:17.365414 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf1b067d-7c45-4d84-ab62-ef3d06385352-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bf1b067d-7c45-4d84-ab62-ef3d06385352" (UID: "bf1b067d-7c45-4d84-ab62-ef3d06385352"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:05:17 crc kubenswrapper[4765]: I0121 13:05:17.375184 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf1b067d-7c45-4d84-ab62-ef3d06385352-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bf1b067d-7c45-4d84-ab62-ef3d06385352" (UID: "bf1b067d-7c45-4d84-ab62-ef3d06385352"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:05:17 crc kubenswrapper[4765]: I0121 13:05:17.467176 4765 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf1b067d-7c45-4d84-ab62-ef3d06385352-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 13:05:17 crc kubenswrapper[4765]: I0121 13:05:17.467363 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf1b067d-7c45-4d84-ab62-ef3d06385352-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 13:05:17 crc kubenswrapper[4765]: I0121 13:05:17.907100 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 13:05:17 crc kubenswrapper[4765]: I0121 13:05:17.907101 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bf1b067d-7c45-4d84-ab62-ef3d06385352","Type":"ContainerDied","Data":"2da4bba93e142f19a1c1caaab6bd1c539b788c2595783b2b50a11fc783ad393a"} Jan 21 13:05:17 crc kubenswrapper[4765]: I0121 13:05:17.909026 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2da4bba93e142f19a1c1caaab6bd1c539b788c2595783b2b50a11fc783ad393a" Jan 21 13:05:17 crc kubenswrapper[4765]: I0121 13:05:17.928368 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4t7jw" event={"ID":"d8dea79f-de5c-4034-9742-c322b723a59c","Type":"ContainerStarted","Data":"dddc4b826faf4663a9c94048b27998dfdcffe180814b53dd419abec1f23d06c5"} Jan 21 13:05:17 crc kubenswrapper[4765]: I0121 13:05:17.957816 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-4t7jw" podStartSLOduration=152.957794695 podStartE2EDuration="2m32.957794695s" podCreationTimestamp="2026-01-21 13:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:17.955614825 +0000 UTC m=+178.973340647" watchObservedRunningTime="2026-01-21 13:05:17.957794695 +0000 UTC m=+178.975520517" Jan 21 13:05:27 crc kubenswrapper[4765]: I0121 13:05:27.182902 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-5cllr" Jan 21 13:05:28 crc kubenswrapper[4765]: I0121 13:05:28.585635 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:05:37 crc kubenswrapper[4765]: I0121 13:05:37.728778 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qxjpg" Jan 21 13:05:43 crc kubenswrapper[4765]: I0121 13:05:43.908259 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd" Jan 21 13:05:43 crc kubenswrapper[4765]: I0121 13:05:43.986354 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc4bm\" (UniqueName: \"kubernetes.io/projected/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-kube-api-access-fc4bm\") pod \"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b\" (UID: \"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b\") " Jan 21 13:05:43 crc kubenswrapper[4765]: I0121 13:05:43.986401 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-config-volume\") pod \"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b\" (UID: \"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b\") " Jan 21 13:05:43 crc kubenswrapper[4765]: I0121 13:05:43.986425 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-secret-volume\") pod \"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b\" (UID: \"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b\") " Jan 21 13:05:43 crc kubenswrapper[4765]: I0121 13:05:43.987626 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-config-volume" (OuterVolumeSpecName: "config-volume") pod "561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b" (UID: "561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:05:44 crc kubenswrapper[4765]: I0121 13:05:44.005956 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-kube-api-access-fc4bm" (OuterVolumeSpecName: "kube-api-access-fc4bm") pod "561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b" (UID: "561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b"). InnerVolumeSpecName "kube-api-access-fc4bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:05:44 crc kubenswrapper[4765]: I0121 13:05:44.006006 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b" (UID: "561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:05:44 crc kubenswrapper[4765]: I0121 13:05:44.087681 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc4bm\" (UniqueName: \"kubernetes.io/projected/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-kube-api-access-fc4bm\") on node \"crc\" DevicePath \"\"" Jan 21 13:05:44 crc kubenswrapper[4765]: I0121 13:05:44.087714 4765 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:05:44 crc kubenswrapper[4765]: I0121 13:05:44.087722 4765 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:05:44 crc kubenswrapper[4765]: I0121 13:05:44.164961 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd" event={"ID":"561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b","Type":"ContainerDied","Data":"2eb5c22845c7178041f58b037686dee473de444e8e572343232af894e45b2e33"} Jan 21 13:05:44 crc kubenswrapper[4765]: I0121 13:05:44.165016 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eb5c22845c7178041f58b037686dee473de444e8e572343232af894e45b2e33" Jan 21 13:05:44 crc kubenswrapper[4765]: I0121 13:05:44.165043 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd" Jan 21 13:05:44 crc kubenswrapper[4765]: I0121 13:05:44.445878 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:05:44 crc kubenswrapper[4765]: I0121 13:05:44.445994 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:05:48 crc kubenswrapper[4765]: E0121 13:05:48.735662 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 13:05:48 crc kubenswrapper[4765]: E0121 13:05:48.735914 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bx5bm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bvt47_openshift-marketplace(7db200a0-358b-415c-960e-cec8935a0435): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 13:05:48 crc kubenswrapper[4765]: E0121 13:05:48.737805 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bvt47" podUID="7db200a0-358b-415c-960e-cec8935a0435" Jan 21 13:05:48 crc kubenswrapper[4765]: E0121 13:05:48.757107 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 13:05:48 crc kubenswrapper[4765]: E0121 13:05:48.757434 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qq5tl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8pg48_openshift-marketplace(1370386e-d1d5-471c-a3cc-fcbc7649a549): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 13:05:48 crc kubenswrapper[4765]: E0121 13:05:48.758597 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-8pg48" podUID="1370386e-d1d5-471c-a3cc-fcbc7649a549" Jan 21 13:05:49 crc kubenswrapper[4765]: I0121 13:05:49.513359 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 13:05:49 crc kubenswrapper[4765]: E0121 13:05:49.514004 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1b067d-7c45-4d84-ab62-ef3d06385352" containerName="pruner" Jan 21 13:05:49 crc kubenswrapper[4765]: I0121 13:05:49.514020 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1b067d-7c45-4d84-ab62-ef3d06385352" containerName="pruner" Jan 21 13:05:49 crc kubenswrapper[4765]: E0121 13:05:49.514043 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b" containerName="collect-profiles" Jan 21 13:05:49 crc kubenswrapper[4765]: I0121 13:05:49.514049 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b" containerName="collect-profiles" Jan 21 13:05:49 crc kubenswrapper[4765]: E0121 13:05:49.514060 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e8024fb-cd96-4996-a1a0-79a3874ccac9" containerName="pruner" Jan 21 13:05:49 crc kubenswrapper[4765]: I0121 13:05:49.514066 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e8024fb-cd96-4996-a1a0-79a3874ccac9" containerName="pruner" Jan 21 13:05:49 crc kubenswrapper[4765]: I0121 13:05:49.514149 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf1b067d-7c45-4d84-ab62-ef3d06385352" containerName="pruner" Jan 21 13:05:49 crc kubenswrapper[4765]: I0121 
13:05:49.514163 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b" containerName="collect-profiles" Jan 21 13:05:49 crc kubenswrapper[4765]: I0121 13:05:49.514176 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e8024fb-cd96-4996-a1a0-79a3874ccac9" containerName="pruner" Jan 21 13:05:49 crc kubenswrapper[4765]: I0121 13:05:49.514575 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:05:49 crc kubenswrapper[4765]: I0121 13:05:49.516267 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 13:05:49 crc kubenswrapper[4765]: I0121 13:05:49.636196 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 13:05:49 crc kubenswrapper[4765]: I0121 13:05:49.638074 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 13:05:49 crc kubenswrapper[4765]: I0121 13:05:49.742061 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5effab5-52b4-4fb7-bbe7-071784ce84d4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f5effab5-52b4-4fb7-bbe7-071784ce84d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:05:49 crc kubenswrapper[4765]: I0121 13:05:49.742149 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5effab5-52b4-4fb7-bbe7-071784ce84d4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f5effab5-52b4-4fb7-bbe7-071784ce84d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:05:49 crc kubenswrapper[4765]: I0121 13:05:49.843502 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5effab5-52b4-4fb7-bbe7-071784ce84d4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f5effab5-52b4-4fb7-bbe7-071784ce84d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:05:49 crc kubenswrapper[4765]: I0121 13:05:49.843655 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5effab5-52b4-4fb7-bbe7-071784ce84d4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f5effab5-52b4-4fb7-bbe7-071784ce84d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:05:49 crc kubenswrapper[4765]: I0121 13:05:49.843940 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5effab5-52b4-4fb7-bbe7-071784ce84d4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f5effab5-52b4-4fb7-bbe7-071784ce84d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:05:49 crc kubenswrapper[4765]: I0121 13:05:49.864989 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5effab5-52b4-4fb7-bbe7-071784ce84d4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f5effab5-52b4-4fb7-bbe7-071784ce84d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:05:49 crc kubenswrapper[4765]: I0121 13:05:49.980535 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:05:50 crc kubenswrapper[4765]: E0121 13:05:50.786497 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bvt47" podUID="7db200a0-358b-415c-960e-cec8935a0435" Jan 21 13:05:50 crc kubenswrapper[4765]: E0121 13:05:50.786871 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8pg48" podUID="1370386e-d1d5-471c-a3cc-fcbc7649a549" Jan 21 13:05:50 crc kubenswrapper[4765]: E0121 13:05:50.897717 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 13:05:50 crc kubenswrapper[4765]: E0121 13:05:50.898644 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k2frp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-csdrp_openshift-marketplace(8f46c9a8-ee1d-497c-92f3-d7f43ebddc85): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 13:05:50 crc kubenswrapper[4765]: E0121 13:05:50.900187 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-csdrp" podUID="8f46c9a8-ee1d-497c-92f3-d7f43ebddc85" Jan 21 13:05:50 crc kubenswrapper[4765]: E0121 13:05:50.914419 
4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 13:05:50 crc kubenswrapper[4765]: E0121 13:05:50.914667 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x59rb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-p4dt7_openshift-marketplace(0c876b68-6eab-460d-983d-51514e30fbd1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 13:05:50 crc kubenswrapper[4765]: E0121 13:05:50.916725 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-p4dt7" podUID="0c876b68-6eab-460d-983d-51514e30fbd1" Jan 21 13:05:54 crc kubenswrapper[4765]: I0121 13:05:54.112807 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 13:05:54 crc kubenswrapper[4765]: I0121 13:05:54.114122 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:05:54 crc kubenswrapper[4765]: I0121 13:05:54.118557 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 13:05:54 crc kubenswrapper[4765]: I0121 13:05:54.212458 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32a0b174-c516-4ed9-9204-e1f15dd18d59-kube-api-access\") pod \"installer-9-crc\" (UID: \"32a0b174-c516-4ed9-9204-e1f15dd18d59\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:05:54 crc kubenswrapper[4765]: I0121 13:05:54.212523 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32a0b174-c516-4ed9-9204-e1f15dd18d59-var-lock\") pod \"installer-9-crc\" (UID: \"32a0b174-c516-4ed9-9204-e1f15dd18d59\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:05:54 crc kubenswrapper[4765]: I0121 13:05:54.212598 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32a0b174-c516-4ed9-9204-e1f15dd18d59-kubelet-dir\") pod \"installer-9-crc\" (UID: \"32a0b174-c516-4ed9-9204-e1f15dd18d59\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:05:54 crc kubenswrapper[4765]: I0121 13:05:54.314122 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32a0b174-c516-4ed9-9204-e1f15dd18d59-kube-api-access\") pod \"installer-9-crc\" (UID: \"32a0b174-c516-4ed9-9204-e1f15dd18d59\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:05:54 crc kubenswrapper[4765]: I0121 13:05:54.314440 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32a0b174-c516-4ed9-9204-e1f15dd18d59-var-lock\") pod \"installer-9-crc\" (UID: \"32a0b174-c516-4ed9-9204-e1f15dd18d59\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:05:54 crc kubenswrapper[4765]: I0121 13:05:54.314520 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32a0b174-c516-4ed9-9204-e1f15dd18d59-kubelet-dir\") pod \"installer-9-crc\" (UID: \"32a0b174-c516-4ed9-9204-e1f15dd18d59\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:05:54 crc kubenswrapper[4765]: I0121 13:05:54.314615 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32a0b174-c516-4ed9-9204-e1f15dd18d59-var-lock\") pod \"installer-9-crc\" (UID: \"32a0b174-c516-4ed9-9204-e1f15dd18d59\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:05:54 crc kubenswrapper[4765]: I0121 13:05:54.314642 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32a0b174-c516-4ed9-9204-e1f15dd18d59-kubelet-dir\") pod \"installer-9-crc\" (UID: \"32a0b174-c516-4ed9-9204-e1f15dd18d59\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:05:54 crc kubenswrapper[4765]: I0121 13:05:54.338067 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32a0b174-c516-4ed9-9204-e1f15dd18d59-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"32a0b174-c516-4ed9-9204-e1f15dd18d59\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:05:54 crc kubenswrapper[4765]: I0121 13:05:54.443451 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:05:55 crc kubenswrapper[4765]: E0121 13:05:55.326901 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-csdrp" podUID="8f46c9a8-ee1d-497c-92f3-d7f43ebddc85" Jan 21 13:05:55 crc kubenswrapper[4765]: E0121 13:05:55.326920 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-p4dt7" podUID="0c876b68-6eab-460d-983d-51514e30fbd1" Jan 21 13:05:55 crc kubenswrapper[4765]: E0121 13:05:55.357126 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 13:05:55 crc kubenswrapper[4765]: E0121 13:05:55.357415 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kk22t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-x7f8m_openshift-marketplace(080522e6-050a-4df7-afe5-2476e455e157): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 13:05:55 crc kubenswrapper[4765]: E0121 13:05:55.358821 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled\"" pod="openshift-marketplace/redhat-operators-x7f8m" podUID="080522e6-050a-4df7-afe5-2476e455e157" Jan 21 13:05:56 crc kubenswrapper[4765]: E0121 13:05:56.765449 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-x7f8m" podUID="080522e6-050a-4df7-afe5-2476e455e157" Jan 21 13:05:56 crc kubenswrapper[4765]: E0121 13:05:56.785043 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 13:05:56 crc kubenswrapper[4765]: E0121 13:05:56.785468 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbgk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-zwg7s_openshift-marketplace(4bd12a18-d34b-4d96-9409-f26a13dc93f5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 13:05:56 crc kubenswrapper[4765]: E0121 13:05:56.786725 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-zwg7s" podUID="4bd12a18-d34b-4d96-9409-f26a13dc93f5" Jan 21 13:05:57 crc kubenswrapper[4765]: I0121 13:05:57.014095 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 13:05:57 crc kubenswrapper[4765]: I0121 13:05:57.245064 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wk44t" 
event={"ID":"44e452dd-2411-4ffb-8b6a-fed70777e6fc","Type":"ContainerStarted","Data":"fb65363089a665ec044a3587a898ad9a012bac66bc50fcb7d89fd52e3e8c49c1"} Jan 21 13:05:57 crc kubenswrapper[4765]: I0121 13:05:57.255634 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrqmv" event={"ID":"bde1e264-573c-4186-8b9b-a0cb024d5d91","Type":"ContainerStarted","Data":"2e40057a6fbccea2944492f59aa38edca0e4ff0c227bb4edee1b57c7373222e2"} Jan 21 13:05:57 crc kubenswrapper[4765]: I0121 13:05:57.258893 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"32a0b174-c516-4ed9-9204-e1f15dd18d59","Type":"ContainerStarted","Data":"69a531db6890734e3191bcde1e91fd789d8061bdde28677c690be81a77d8af27"} Jan 21 13:05:57 crc kubenswrapper[4765]: E0121 13:05:57.263615 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-zwg7s" podUID="4bd12a18-d34b-4d96-9409-f26a13dc93f5" Jan 21 13:05:57 crc kubenswrapper[4765]: I0121 13:05:57.303294 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 13:05:58 crc kubenswrapper[4765]: I0121 13:05:58.265741 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f5effab5-52b4-4fb7-bbe7-071784ce84d4","Type":"ContainerStarted","Data":"9fac4473abe36883792572343ddb2a14424bac7187bc7a716fe95ff08a0d9032"} Jan 21 13:05:58 crc kubenswrapper[4765]: I0121 13:05:58.267609 4765 generic.go:334] "Generic (PLEG): container finished" podID="bde1e264-573c-4186-8b9b-a0cb024d5d91" containerID="2e40057a6fbccea2944492f59aa38edca0e4ff0c227bb4edee1b57c7373222e2" exitCode=0 Jan 21 13:05:58 crc kubenswrapper[4765]: I0121 13:05:58.267700 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrqmv" event={"ID":"bde1e264-573c-4186-8b9b-a0cb024d5d91","Type":"ContainerDied","Data":"2e40057a6fbccea2944492f59aa38edca0e4ff0c227bb4edee1b57c7373222e2"} Jan 21 13:05:58 crc kubenswrapper[4765]: I0121 13:05:58.278317 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"32a0b174-c516-4ed9-9204-e1f15dd18d59","Type":"ContainerStarted","Data":"47225b7789413a8ca919b146b40e0e567d06908c8eec9d82ac4af3b094846a93"} Jan 21 13:05:58 crc kubenswrapper[4765]: I0121 13:05:58.307360 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=4.307340679 podStartE2EDuration="4.307340679s" podCreationTimestamp="2026-01-21 13:05:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:58.305677423 +0000 UTC m=+219.323403275" watchObservedRunningTime="2026-01-21 13:05:58.307340679 +0000 UTC m=+219.325066501" Jan 21 13:05:59 crc kubenswrapper[4765]: I0121 13:05:59.286358 4765 generic.go:334] "Generic (PLEG): container finished" podID="44e452dd-2411-4ffb-8b6a-fed70777e6fc" containerID="fb65363089a665ec044a3587a898ad9a012bac66bc50fcb7d89fd52e3e8c49c1" exitCode=0 Jan 21 13:05:59 crc kubenswrapper[4765]: I0121 13:05:59.286478 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-wk44t" event={"ID":"44e452dd-2411-4ffb-8b6a-fed70777e6fc","Type":"ContainerDied","Data":"fb65363089a665ec044a3587a898ad9a012bac66bc50fcb7d89fd52e3e8c49c1"} Jan 21 13:05:59 crc kubenswrapper[4765]: I0121 13:05:59.297292 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f5effab5-52b4-4fb7-bbe7-071784ce84d4","Type":"ContainerStarted","Data":"0655f905a132db24122325d21f09990e83f95b807e59013cb9addda559b3fe8a"} Jan 21 13:05:59 crc kubenswrapper[4765]: I0121 13:05:59.334320 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=10.334297117 podStartE2EDuration="10.334297117s" podCreationTimestamp="2026-01-21 13:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:05:59.332542218 +0000 UTC m=+220.350268050" watchObservedRunningTime="2026-01-21 13:05:59.334297117 +0000 UTC m=+220.352022949" Jan 21 13:06:00 crc kubenswrapper[4765]: I0121 13:06:00.303034 4765 generic.go:334] "Generic (PLEG): container finished" podID="f5effab5-52b4-4fb7-bbe7-071784ce84d4" containerID="0655f905a132db24122325d21f09990e83f95b807e59013cb9addda559b3fe8a" exitCode=0 Jan 21 13:06:00 crc kubenswrapper[4765]: I0121 13:06:00.303097 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f5effab5-52b4-4fb7-bbe7-071784ce84d4","Type":"ContainerDied","Data":"0655f905a132db24122325d21f09990e83f95b807e59013cb9addda559b3fe8a"} Jan 21 13:06:01 crc kubenswrapper[4765]: I0121 13:06:01.317691 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wk44t" event={"ID":"44e452dd-2411-4ffb-8b6a-fed70777e6fc","Type":"ContainerStarted","Data":"4a7359f490f833596f7c52bfaa9f5c04e16e73a1fdd8353994ce310ff416dff7"} Jan 21 13:06:01 crc kubenswrapper[4765]: I0121 13:06:01.321605 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrqmv" event={"ID":"bde1e264-573c-4186-8b9b-a0cb024d5d91","Type":"ContainerStarted","Data":"8abe7575ee5ee2de03eef01ef93d822fb955841dffd71024dd19b0d7d2978cd1"} Jan 21 13:06:01 crc kubenswrapper[4765]: I0121 13:06:01.344691 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wk44t" podStartSLOduration=5.228740421 podStartE2EDuration="51.344666822s" podCreationTimestamp="2026-01-21 13:05:10 +0000 UTC" firstStartedPulling="2026-01-21 13:05:14.81842269 +0000 UTC m=+175.836148512" lastFinishedPulling="2026-01-21 13:06:00.934349091 +0000 UTC m=+221.952074913" observedRunningTime="2026-01-21 13:06:01.340406644 +0000 UTC m=+222.358132466" watchObservedRunningTime="2026-01-21 13:06:01.344666822 +0000 UTC m=+222.362392644" Jan 21 13:06:01 crc kubenswrapper[4765]: I0121 13:06:01.370839 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rrqmv" podStartSLOduration=6.176991898 podStartE2EDuration="53.370816322s" podCreationTimestamp="2026-01-21 13:05:08 +0000 UTC" firstStartedPulling="2026-01-21 13:05:12.698964598 +0000 UTC m=+173.716690420" lastFinishedPulling="2026-01-21 13:05:59.892789022 +0000 UTC m=+220.910514844" observedRunningTime="2026-01-21 13:06:01.366945956 +0000 UTC m=+222.384671768" watchObservedRunningTime="2026-01-21 13:06:01.370816322 
+0000 UTC m=+222.388542144" Jan 21 13:06:01 crc kubenswrapper[4765]: I0121 13:06:01.606270 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:06:01 crc kubenswrapper[4765]: I0121 13:06:01.724883 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5effab5-52b4-4fb7-bbe7-071784ce84d4-kubelet-dir\") pod \"f5effab5-52b4-4fb7-bbe7-071784ce84d4\" (UID: \"f5effab5-52b4-4fb7-bbe7-071784ce84d4\") " Jan 21 13:06:01 crc kubenswrapper[4765]: I0121 13:06:01.725037 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5effab5-52b4-4fb7-bbe7-071784ce84d4-kube-api-access\") pod \"f5effab5-52b4-4fb7-bbe7-071784ce84d4\" (UID: \"f5effab5-52b4-4fb7-bbe7-071784ce84d4\") " Jan 21 13:06:01 crc kubenswrapper[4765]: I0121 13:06:01.725419 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5effab5-52b4-4fb7-bbe7-071784ce84d4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f5effab5-52b4-4fb7-bbe7-071784ce84d4" (UID: "f5effab5-52b4-4fb7-bbe7-071784ce84d4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:06:01 crc kubenswrapper[4765]: I0121 13:06:01.725768 4765 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5effab5-52b4-4fb7-bbe7-071784ce84d4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:01 crc kubenswrapper[4765]: I0121 13:06:01.737631 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5effab5-52b4-4fb7-bbe7-071784ce84d4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f5effab5-52b4-4fb7-bbe7-071784ce84d4" (UID: "f5effab5-52b4-4fb7-bbe7-071784ce84d4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:06:01 crc kubenswrapper[4765]: I0121 13:06:01.827569 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5effab5-52b4-4fb7-bbe7-071784ce84d4-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:02 crc kubenswrapper[4765]: I0121 13:06:02.330309 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f5effab5-52b4-4fb7-bbe7-071784ce84d4","Type":"ContainerDied","Data":"9fac4473abe36883792572343ddb2a14424bac7187bc7a716fe95ff08a0d9032"} Jan 21 13:06:02 crc kubenswrapper[4765]: I0121 13:06:02.330958 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fac4473abe36883792572343ddb2a14424bac7187bc7a716fe95ff08a0d9032" Jan 21 13:06:02 crc kubenswrapper[4765]: I0121 13:06:02.330433 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 13:06:04 crc kubenswrapper[4765]: I0121 13:06:04.345781 4765 generic.go:334] "Generic (PLEG): container finished" podID="7db200a0-358b-415c-960e-cec8935a0435" containerID="ef1178126537eff264f2fec328637074a56ecd58eb0734de95a3b11827e64e2d" exitCode=0 Jan 21 13:06:04 crc kubenswrapper[4765]: I0121 13:06:04.345887 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvt47" event={"ID":"7db200a0-358b-415c-960e-cec8935a0435","Type":"ContainerDied","Data":"ef1178126537eff264f2fec328637074a56ecd58eb0734de95a3b11827e64e2d"} Jan 21 13:06:04 crc kubenswrapper[4765]: I0121 13:06:04.348794 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pg48" event={"ID":"1370386e-d1d5-471c-a3cc-fcbc7649a549","Type":"ContainerStarted","Data":"3220c81ab59dfbb9a0633c250991f1d610cf480fabb9e45f667e16ecdf936676"} Jan 21 13:06:05 crc kubenswrapper[4765]: I0121 13:06:05.356282 4765 generic.go:334] "Generic (PLEG): container finished" podID="1370386e-d1d5-471c-a3cc-fcbc7649a549" containerID="3220c81ab59dfbb9a0633c250991f1d610cf480fabb9e45f667e16ecdf936676" exitCode=0 Jan 21 13:06:05 crc kubenswrapper[4765]: I0121 13:06:05.356333 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pg48" event={"ID":"1370386e-d1d5-471c-a3cc-fcbc7649a549","Type":"ContainerDied","Data":"3220c81ab59dfbb9a0633c250991f1d610cf480fabb9e45f667e16ecdf936676"} Jan 21 13:06:06 crc kubenswrapper[4765]: I0121 13:06:06.363999 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvt47" event={"ID":"7db200a0-358b-415c-960e-cec8935a0435","Type":"ContainerStarted","Data":"b76ada1806d232231d80e25fb8396130e9d15290691e61209592c8d65cdcc0ff"} Jan 21 13:06:06 crc kubenswrapper[4765]: I0121 13:06:06.390950 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bvt47" podStartSLOduration=6.364847648 podStartE2EDuration="1m0.390929152s" podCreationTimestamp="2026-01-21 13:05:06 +0000 UTC" firstStartedPulling="2026-01-21 13:05:11.504242815 +0000 UTC m=+172.521968637" lastFinishedPulling="2026-01-21 13:06:05.530324319 +0000 UTC m=+226.548050141" observedRunningTime="2026-01-21 13:06:06.38615335 +0000 UTC m=+227.403879182" watchObservedRunningTime="2026-01-21 13:06:06.390929152 +0000 UTC m=+227.408654974" Jan 21 13:06:07 crc kubenswrapper[4765]: I0121 13:06:07.521310 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bvt47" Jan 21 13:06:07 crc kubenswrapper[4765]: I0121 13:06:07.521847 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bvt47" Jan 21 13:06:08 crc kubenswrapper[4765]: I0121 13:06:08.376777 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pg48" event={"ID":"1370386e-d1d5-471c-a3cc-fcbc7649a549","Type":"ContainerStarted","Data":"104973b87a05e1b4152e671cd38eaeeae50bba60b0c523833591131d44ae49d6"} Jan 21 13:06:08 crc kubenswrapper[4765]: I0121 13:06:08.736642 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-bvt47" podUID="7db200a0-358b-415c-960e-cec8935a0435" containerName="registry-server" probeResult="failure" output=< Jan 21 13:06:08 crc kubenswrapper[4765]: timeout: failed 
to connect service ":50051" within 1s Jan 21 13:06:08 crc kubenswrapper[4765]: > Jan 21 13:06:09 crc kubenswrapper[4765]: I0121 13:06:09.356684 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rrqmv" Jan 21 13:06:09 crc kubenswrapper[4765]: I0121 13:06:09.357317 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rrqmv" Jan 21 13:06:09 crc kubenswrapper[4765]: I0121 13:06:09.636051 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8pg48" podStartSLOduration=8.552951482 podStartE2EDuration="1m3.636031932s" podCreationTimestamp="2026-01-21 13:05:06 +0000 UTC" firstStartedPulling="2026-01-21 13:05:11.383465946 +0000 UTC m=+172.401191768" lastFinishedPulling="2026-01-21 13:06:06.466546396 +0000 UTC m=+227.484272218" observedRunningTime="2026-01-21 13:06:08.406787298 +0000 UTC m=+229.424513140" watchObservedRunningTime="2026-01-21 13:06:09.636031932 +0000 UTC m=+230.653757754" Jan 21 13:06:10 crc kubenswrapper[4765]: I0121 13:06:10.400595 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rrqmv" podUID="bde1e264-573c-4186-8b9b-a0cb024d5d91" containerName="registry-server" probeResult="failure" output=< Jan 21 13:06:10 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s Jan 21 13:06:10 crc kubenswrapper[4765]: > Jan 21 13:06:11 crc kubenswrapper[4765]: I0121 13:06:11.018002 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wk44t" Jan 21 13:06:11 crc kubenswrapper[4765]: I0121 13:06:11.018084 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wk44t" Jan 21 13:06:11 crc kubenswrapper[4765]: I0121 13:06:11.340065 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wk44t" Jan 21 13:06:11 crc kubenswrapper[4765]: I0121 13:06:11.440324 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wk44t" Jan 21 13:06:14 crc kubenswrapper[4765]: I0121 13:06:14.446288 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:06:14 crc kubenswrapper[4765]: I0121 13:06:14.446986 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:06:14 crc kubenswrapper[4765]: I0121 13:06:14.447076 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:06:14 crc kubenswrapper[4765]: I0121 13:06:14.448187 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae"} pod="openshift-machine-config-operator/machine-config-daemon-v72nq" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:06:14 crc kubenswrapper[4765]: I0121 13:06:14.448464 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" containerID="cri-o://0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae" gracePeriod=600 Jan 21 13:06:15 crc kubenswrapper[4765]: I0121 13:06:15.481990 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wk44t"] Jan 21 13:06:15 crc kubenswrapper[4765]: I0121 13:06:15.482369 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wk44t" podUID="44e452dd-2411-4ffb-8b6a-fed70777e6fc" containerName="registry-server" containerID="cri-o://4a7359f490f833596f7c52bfaa9f5c04e16e73a1fdd8353994ce310ff416dff7" gracePeriod=2 Jan 21 13:06:17 crc kubenswrapper[4765]: I0121 13:06:17.433955 4765 generic.go:334] "Generic (PLEG): container finished" podID="e149390c-e4da-4dfd-bed2-b14de058f921" containerID="0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae" exitCode=0 Jan 21 13:06:17 crc kubenswrapper[4765]: I0121 13:06:17.434024 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerDied","Data":"0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae"} Jan 21 13:06:17 crc kubenswrapper[4765]: I0121 13:06:17.570499 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bvt47" Jan 21 13:06:17 crc kubenswrapper[4765]: I0121 13:06:17.622054 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bvt47" Jan 21 13:06:17 crc kubenswrapper[4765]: I0121 13:06:17.751520 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8pg48" Jan 21 13:06:17 crc kubenswrapper[4765]: I0121 13:06:17.753520 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8pg48" Jan 21 13:06:17 crc kubenswrapper[4765]: I0121 13:06:17.794322 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8pg48" Jan 21 13:06:17 crc kubenswrapper[4765]: I0121 13:06:17.880425 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bvt47"] Jan 21 13:06:18 crc kubenswrapper[4765]: I0121 13:06:18.603537 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8pg48" Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.337427 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wk44t" Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.422159 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rrqmv" Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.455136 4765 generic.go:334] "Generic (PLEG): container finished" podID="44e452dd-2411-4ffb-8b6a-fed70777e6fc" containerID="4a7359f490f833596f7c52bfaa9f5c04e16e73a1fdd8353994ce310ff416dff7" exitCode=0 Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.455251 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wk44t" event={"ID":"44e452dd-2411-4ffb-8b6a-fed70777e6fc","Type":"ContainerDied","Data":"4a7359f490f833596f7c52bfaa9f5c04e16e73a1fdd8353994ce310ff416dff7"} Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.455317 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wk44t" event={"ID":"44e452dd-2411-4ffb-8b6a-fed70777e6fc","Type":"ContainerDied","Data":"cf3b8d95df9333d6c620f0d525f8a2f45d16642ce7034f5f2ff89631ae5f06fe"} Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.455343 4765 scope.go:117] "RemoveContainer" containerID="4a7359f490f833596f7c52bfaa9f5c04e16e73a1fdd8353994ce310ff416dff7" Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.455433 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bvt47" podUID="7db200a0-358b-415c-960e-cec8935a0435" containerName="registry-server" containerID="cri-o://b76ada1806d232231d80e25fb8396130e9d15290691e61209592c8d65cdcc0ff" gracePeriod=2 Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.455790 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wk44t" Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.470682 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rrqmv" Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.503852 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e452dd-2411-4ffb-8b6a-fed70777e6fc-utilities\") pod \"44e452dd-2411-4ffb-8b6a-fed70777e6fc\" (UID: \"44e452dd-2411-4ffb-8b6a-fed70777e6fc\") " Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.503991 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnfh2\" (UniqueName: \"kubernetes.io/projected/44e452dd-2411-4ffb-8b6a-fed70777e6fc-kube-api-access-xnfh2\") pod \"44e452dd-2411-4ffb-8b6a-fed70777e6fc\" (UID: \"44e452dd-2411-4ffb-8b6a-fed70777e6fc\") " Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.504060 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e452dd-2411-4ffb-8b6a-fed70777e6fc-catalog-content\") pod \"44e452dd-2411-4ffb-8b6a-fed70777e6fc\" (UID: \"44e452dd-2411-4ffb-8b6a-fed70777e6fc\") " Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.505195 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e452dd-2411-4ffb-8b6a-fed70777e6fc-utilities" (OuterVolumeSpecName: "utilities") pod "44e452dd-2411-4ffb-8b6a-fed70777e6fc" (UID: "44e452dd-2411-4ffb-8b6a-fed70777e6fc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.523886 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e452dd-2411-4ffb-8b6a-fed70777e6fc-kube-api-access-xnfh2" (OuterVolumeSpecName: "kube-api-access-xnfh2") pod "44e452dd-2411-4ffb-8b6a-fed70777e6fc" (UID: "44e452dd-2411-4ffb-8b6a-fed70777e6fc"). InnerVolumeSpecName "kube-api-access-xnfh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.605435 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e452dd-2411-4ffb-8b6a-fed70777e6fc-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.605486 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnfh2\" (UniqueName: \"kubernetes.io/projected/44e452dd-2411-4ffb-8b6a-fed70777e6fc-kube-api-access-xnfh2\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.644156 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e452dd-2411-4ffb-8b6a-fed70777e6fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44e452dd-2411-4ffb-8b6a-fed70777e6fc" (UID: "44e452dd-2411-4ffb-8b6a-fed70777e6fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.666495 4765 scope.go:117] "RemoveContainer" containerID="fb65363089a665ec044a3587a898ad9a012bac66bc50fcb7d89fd52e3e8c49c1" Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.708247 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e452dd-2411-4ffb-8b6a-fed70777e6fc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.790168 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wk44t"] Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.795157 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wk44t"] Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.897466 4765 scope.go:117] "RemoveContainer" containerID="69602753f4342fe98d31e75cce43392073ac15e2f391ab1be38803456d196019" Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.921497 4765 scope.go:117] "RemoveContainer" containerID="4a7359f490f833596f7c52bfaa9f5c04e16e73a1fdd8353994ce310ff416dff7" Jan 21 13:06:19 crc kubenswrapper[4765]: E0121 13:06:19.922023 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a7359f490f833596f7c52bfaa9f5c04e16e73a1fdd8353994ce310ff416dff7\": container with ID starting with 4a7359f490f833596f7c52bfaa9f5c04e16e73a1fdd8353994ce310ff416dff7 not found: ID does not exist" containerID="4a7359f490f833596f7c52bfaa9f5c04e16e73a1fdd8353994ce310ff416dff7" Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.922081 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a7359f490f833596f7c52bfaa9f5c04e16e73a1fdd8353994ce310ff416dff7"} err="failed to get container status \"4a7359f490f833596f7c52bfaa9f5c04e16e73a1fdd8353994ce310ff416dff7\": rpc error: code = NotFound desc = could not find container 
\"4a7359f490f833596f7c52bfaa9f5c04e16e73a1fdd8353994ce310ff416dff7\": container with ID starting with 4a7359f490f833596f7c52bfaa9f5c04e16e73a1fdd8353994ce310ff416dff7 not found: ID does not exist" Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.922128 4765 scope.go:117] "RemoveContainer" containerID="fb65363089a665ec044a3587a898ad9a012bac66bc50fcb7d89fd52e3e8c49c1" Jan 21 13:06:19 crc kubenswrapper[4765]: E0121 13:06:19.922705 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb65363089a665ec044a3587a898ad9a012bac66bc50fcb7d89fd52e3e8c49c1\": container with ID starting with fb65363089a665ec044a3587a898ad9a012bac66bc50fcb7d89fd52e3e8c49c1 not found: ID does not exist" containerID="fb65363089a665ec044a3587a898ad9a012bac66bc50fcb7d89fd52e3e8c49c1" Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.922734 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb65363089a665ec044a3587a898ad9a012bac66bc50fcb7d89fd52e3e8c49c1"} err="failed to get container status \"fb65363089a665ec044a3587a898ad9a012bac66bc50fcb7d89fd52e3e8c49c1\": rpc error: code = NotFound desc = could not find container \"fb65363089a665ec044a3587a898ad9a012bac66bc50fcb7d89fd52e3e8c49c1\": container with ID starting with fb65363089a665ec044a3587a898ad9a012bac66bc50fcb7d89fd52e3e8c49c1 not found: ID does not exist" Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.922755 4765 scope.go:117] "RemoveContainer" containerID="69602753f4342fe98d31e75cce43392073ac15e2f391ab1be38803456d196019" Jan 21 13:06:19 crc kubenswrapper[4765]: E0121 13:06:19.923255 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69602753f4342fe98d31e75cce43392073ac15e2f391ab1be38803456d196019\": container with ID starting with 69602753f4342fe98d31e75cce43392073ac15e2f391ab1be38803456d196019 not found: ID does not exist" containerID="69602753f4342fe98d31e75cce43392073ac15e2f391ab1be38803456d196019" Jan 21 13:06:19 crc kubenswrapper[4765]: I0121 13:06:19.923278 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69602753f4342fe98d31e75cce43392073ac15e2f391ab1be38803456d196019"} err="failed to get container status \"69602753f4342fe98d31e75cce43392073ac15e2f391ab1be38803456d196019\": rpc error: code = NotFound desc = could not find container \"69602753f4342fe98d31e75cce43392073ac15e2f391ab1be38803456d196019\": container with ID starting with 69602753f4342fe98d31e75cce43392073ac15e2f391ab1be38803456d196019 not found: ID does not exist" Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.163274 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bvt47" Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.319205 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7db200a0-358b-415c-960e-cec8935a0435-catalog-content\") pod \"7db200a0-358b-415c-960e-cec8935a0435\" (UID: \"7db200a0-358b-415c-960e-cec8935a0435\") " Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.319313 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx5bm\" (UniqueName: \"kubernetes.io/projected/7db200a0-358b-415c-960e-cec8935a0435-kube-api-access-bx5bm\") pod \"7db200a0-358b-415c-960e-cec8935a0435\" (UID: \"7db200a0-358b-415c-960e-cec8935a0435\") " Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.319380 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7db200a0-358b-415c-960e-cec8935a0435-utilities\") pod \"7db200a0-358b-415c-960e-cec8935a0435\" (UID: \"7db200a0-358b-415c-960e-cec8935a0435\") " Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.320785 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7db200a0-358b-415c-960e-cec8935a0435-utilities" (OuterVolumeSpecName: "utilities") pod "7db200a0-358b-415c-960e-cec8935a0435" (UID: "7db200a0-358b-415c-960e-cec8935a0435"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.341562 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7db200a0-358b-415c-960e-cec8935a0435-kube-api-access-bx5bm" (OuterVolumeSpecName: "kube-api-access-bx5bm") pod "7db200a0-358b-415c-960e-cec8935a0435" (UID: "7db200a0-358b-415c-960e-cec8935a0435"). InnerVolumeSpecName "kube-api-access-bx5bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.386150 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7db200a0-358b-415c-960e-cec8935a0435-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7db200a0-358b-415c-960e-cec8935a0435" (UID: "7db200a0-358b-415c-960e-cec8935a0435"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.420709 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7db200a0-358b-415c-960e-cec8935a0435-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.420742 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx5bm\" (UniqueName: \"kubernetes.io/projected/7db200a0-358b-415c-960e-cec8935a0435-kube-api-access-bx5bm\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.420752 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7db200a0-358b-415c-960e-cec8935a0435-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.465491 4765 generic.go:334] "Generic (PLEG): container finished" podID="8f46c9a8-ee1d-497c-92f3-d7f43ebddc85" containerID="c0df5f6988cd3207387ebacc21ed589b506f5b47953d43a7b0387d144b0792e0" exitCode=0 Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.465603 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-csdrp" event={"ID":"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85","Type":"ContainerDied","Data":"c0df5f6988cd3207387ebacc21ed589b506f5b47953d43a7b0387d144b0792e0"} Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.479771 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7f8m" event={"ID":"080522e6-050a-4df7-afe5-2476e455e157","Type":"ContainerStarted","Data":"e4e74cf0e1d966f812bc72af991a6e65f47cfc057b78aacfe740b216a98d8f02"} Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.482908 4765 generic.go:334] "Generic (PLEG): container finished" podID="7db200a0-358b-415c-960e-cec8935a0435" containerID="b76ada1806d232231d80e25fb8396130e9d15290691e61209592c8d65cdcc0ff" exitCode=0 Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.482971 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvt47" event={"ID":"7db200a0-358b-415c-960e-cec8935a0435","Type":"ContainerDied","Data":"b76ada1806d232231d80e25fb8396130e9d15290691e61209592c8d65cdcc0ff"} Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.483006 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvt47" event={"ID":"7db200a0-358b-415c-960e-cec8935a0435","Type":"ContainerDied","Data":"92e2d3dcaed6f29693f4f2cf15b2bb934ff39989e97f8cf4e49e0a2658bb018e"} Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.483025 4765 scope.go:117] "RemoveContainer" containerID="b76ada1806d232231d80e25fb8396130e9d15290691e61209592c8d65cdcc0ff" Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.483120 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bvt47" Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.514541 4765 generic.go:334] "Generic (PLEG): container finished" podID="0c876b68-6eab-460d-983d-51514e30fbd1" containerID="d6a064c62809a4de969b28c31d3f235db37ecdc7f7e1a76c06e3d4543929357d" exitCode=0 Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.514706 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4dt7" event={"ID":"0c876b68-6eab-460d-983d-51514e30fbd1","Type":"ContainerDied","Data":"d6a064c62809a4de969b28c31d3f235db37ecdc7f7e1a76c06e3d4543929357d"} Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.526259 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"f7a5ac8d24692585ce478eff1513b2ab0b0e70857dfc544d9cfa881f0e004073"} Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.536000 4765 scope.go:117] "RemoveContainer" containerID="ef1178126537eff264f2fec328637074a56ecd58eb0734de95a3b11827e64e2d" Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.556835 4765 generic.go:334] "Generic (PLEG): container finished" podID="4bd12a18-d34b-4d96-9409-f26a13dc93f5" containerID="870fb87cd9366a24bd6c45586c18561e14af0e254ffb111f70b457590026fc4f" exitCode=0 Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.557324 4765 scope.go:117] "RemoveContainer" containerID="5ca86b1827e1bc08a3f8fd97c282aaedff58f1ebf4c57c250ba6a3b2533d6f80" Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.557539 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwg7s" event={"ID":"4bd12a18-d34b-4d96-9409-f26a13dc93f5","Type":"ContainerDied","Data":"870fb87cd9366a24bd6c45586c18561e14af0e254ffb111f70b457590026fc4f"} Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.582638 4765 scope.go:117] "RemoveContainer" containerID="b76ada1806d232231d80e25fb8396130e9d15290691e61209592c8d65cdcc0ff" Jan 21 13:06:20 crc kubenswrapper[4765]: E0121 13:06:20.583261 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b76ada1806d232231d80e25fb8396130e9d15290691e61209592c8d65cdcc0ff\": container with ID starting with b76ada1806d232231d80e25fb8396130e9d15290691e61209592c8d65cdcc0ff not found: ID does not exist" containerID="b76ada1806d232231d80e25fb8396130e9d15290691e61209592c8d65cdcc0ff" Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.583329 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76ada1806d232231d80e25fb8396130e9d15290691e61209592c8d65cdcc0ff"} err="failed to get container status \"b76ada1806d232231d80e25fb8396130e9d15290691e61209592c8d65cdcc0ff\": rpc error: code = NotFound desc = could not find container \"b76ada1806d232231d80e25fb8396130e9d15290691e61209592c8d65cdcc0ff\": container with ID starting with b76ada1806d232231d80e25fb8396130e9d15290691e61209592c8d65cdcc0ff not found: ID does not exist" Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.583394 4765 scope.go:117] "RemoveContainer" containerID="ef1178126537eff264f2fec328637074a56ecd58eb0734de95a3b11827e64e2d" Jan 21 13:06:20 crc kubenswrapper[4765]: E0121 13:06:20.583862 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ef1178126537eff264f2fec328637074a56ecd58eb0734de95a3b11827e64e2d\": container with ID starting with ef1178126537eff264f2fec328637074a56ecd58eb0734de95a3b11827e64e2d not found: ID does not exist" containerID="ef1178126537eff264f2fec328637074a56ecd58eb0734de95a3b11827e64e2d" Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.583967 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef1178126537eff264f2fec328637074a56ecd58eb0734de95a3b11827e64e2d"} err="failed to get container status \"ef1178126537eff264f2fec328637074a56ecd58eb0734de95a3b11827e64e2d\": rpc error: code = NotFound desc = could not find container \"ef1178126537eff264f2fec328637074a56ecd58eb0734de95a3b11827e64e2d\": container with ID starting with ef1178126537eff264f2fec328637074a56ecd58eb0734de95a3b11827e64e2d not found: ID does not exist" Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.584061 4765 scope.go:117] "RemoveContainer" containerID="5ca86b1827e1bc08a3f8fd97c282aaedff58f1ebf4c57c250ba6a3b2533d6f80" Jan 21 13:06:20 crc kubenswrapper[4765]: E0121 13:06:20.585094 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ca86b1827e1bc08a3f8fd97c282aaedff58f1ebf4c57c250ba6a3b2533d6f80\": container with ID starting with 5ca86b1827e1bc08a3f8fd97c282aaedff58f1ebf4c57c250ba6a3b2533d6f80 not found: ID does not exist" containerID="5ca86b1827e1bc08a3f8fd97c282aaedff58f1ebf4c57c250ba6a3b2533d6f80" Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.585118 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca86b1827e1bc08a3f8fd97c282aaedff58f1ebf4c57c250ba6a3b2533d6f80"} err="failed to get container status \"5ca86b1827e1bc08a3f8fd97c282aaedff58f1ebf4c57c250ba6a3b2533d6f80\": rpc error: code = NotFound desc = could not find container \"5ca86b1827e1bc08a3f8fd97c282aaedff58f1ebf4c57c250ba6a3b2533d6f80\": container with ID starting with 5ca86b1827e1bc08a3f8fd97c282aaedff58f1ebf4c57c250ba6a3b2533d6f80 not found: ID does not exist" Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.603153 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bvt47"] Jan 21 13:06:20 crc kubenswrapper[4765]: I0121 13:06:20.605776 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bvt47"] Jan 21 13:06:21 crc kubenswrapper[4765]: I0121 13:06:21.565217 4765 generic.go:334] "Generic (PLEG): container finished" podID="080522e6-050a-4df7-afe5-2476e455e157" containerID="e4e74cf0e1d966f812bc72af991a6e65f47cfc057b78aacfe740b216a98d8f02" exitCode=0 Jan 21 13:06:21 crc kubenswrapper[4765]: I0121 13:06:21.565250 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7f8m" event={"ID":"080522e6-050a-4df7-afe5-2476e455e157","Type":"ContainerDied","Data":"e4e74cf0e1d966f812bc72af991a6e65f47cfc057b78aacfe740b216a98d8f02"} Jan 21 13:06:21 crc kubenswrapper[4765]: I0121 13:06:21.571435 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4dt7" event={"ID":"0c876b68-6eab-460d-983d-51514e30fbd1","Type":"ContainerStarted","Data":"693aa52d2385d4e72f9fd097808fcb61f345f417cc77ac975bfc049e3ee84073"} Jan 21 13:06:21 crc kubenswrapper[4765]: I0121 13:06:21.577171 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwg7s" 
event={"ID":"4bd12a18-d34b-4d96-9409-f26a13dc93f5","Type":"ContainerStarted","Data":"c6105f3175bbb9953416d8896274989977f2a79bfa64ab1de1508c18fa4d803f"} Jan 21 13:06:21 crc kubenswrapper[4765]: I0121 13:06:21.580553 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-csdrp" event={"ID":"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85","Type":"ContainerStarted","Data":"58c292d8ab3268cd28e5b8aecb339e54f89484da567fe9425ca52492b24f2b5d"} Jan 21 13:06:21 crc kubenswrapper[4765]: I0121 13:06:21.619523 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p4dt7" podStartSLOduration=5.02207226 podStartE2EDuration="1m14.619494942s" podCreationTimestamp="2026-01-21 13:05:07 +0000 UTC" firstStartedPulling="2026-01-21 13:05:11.366427427 +0000 UTC m=+172.384153249" lastFinishedPulling="2026-01-21 13:06:20.963850109 +0000 UTC m=+241.981575931" observedRunningTime="2026-01-21 13:06:21.616901511 +0000 UTC m=+242.634627353" watchObservedRunningTime="2026-01-21 13:06:21.619494942 +0000 UTC m=+242.637220764" Jan 21 13:06:21 crc kubenswrapper[4765]: I0121 13:06:21.623822 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44e452dd-2411-4ffb-8b6a-fed70777e6fc" path="/var/lib/kubelet/pods/44e452dd-2411-4ffb-8b6a-fed70777e6fc/volumes" Jan 21 13:06:21 crc kubenswrapper[4765]: I0121 13:06:21.624611 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7db200a0-358b-415c-960e-cec8935a0435" path="/var/lib/kubelet/pods/7db200a0-358b-415c-960e-cec8935a0435/volumes" Jan 21 13:06:21 crc kubenswrapper[4765]: I0121 13:06:21.640358 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zwg7s" podStartSLOduration=5.2010106369999995 podStartE2EDuration="1m13.640335897s" podCreationTimestamp="2026-01-21 13:05:08 +0000 UTC" firstStartedPulling="2026-01-21 13:05:12.796087915 +0000 UTC m=+173.813813737" lastFinishedPulling="2026-01-21 13:06:21.235413175 +0000 UTC m=+242.253138997" observedRunningTime="2026-01-21 13:06:21.639488153 +0000 UTC m=+242.657213985" watchObservedRunningTime="2026-01-21 13:06:21.640335897 +0000 UTC m=+242.658061709" Jan 21 13:06:21 crc kubenswrapper[4765]: I0121 13:06:21.663804 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-csdrp" podStartSLOduration=5.841097831 podStartE2EDuration="1m15.663783673s" podCreationTimestamp="2026-01-21 13:05:06 +0000 UTC" firstStartedPulling="2026-01-21 13:05:11.395447247 +0000 UTC m=+172.413173069" lastFinishedPulling="2026-01-21 13:06:21.218133089 +0000 UTC m=+242.235858911" observedRunningTime="2026-01-21 13:06:21.660743109 +0000 UTC m=+242.678468931" watchObservedRunningTime="2026-01-21 13:06:21.663783673 +0000 UTC m=+242.681509495" Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.081194 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrqmv"] Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.081570 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rrqmv" podUID="bde1e264-573c-4186-8b9b-a0cb024d5d91" containerName="registry-server" containerID="cri-o://8abe7575ee5ee2de03eef01ef93d822fb955841dffd71024dd19b0d7d2978cd1" gracePeriod=2 Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.463093 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrqmv" Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.562128 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bde1e264-573c-4186-8b9b-a0cb024d5d91-catalog-content\") pod \"bde1e264-573c-4186-8b9b-a0cb024d5d91\" (UID: \"bde1e264-573c-4186-8b9b-a0cb024d5d91\") " Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.574797 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7whw\" (UniqueName: \"kubernetes.io/projected/bde1e264-573c-4186-8b9b-a0cb024d5d91-kube-api-access-l7whw\") pod \"bde1e264-573c-4186-8b9b-a0cb024d5d91\" (UID: \"bde1e264-573c-4186-8b9b-a0cb024d5d91\") " Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.574857 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bde1e264-573c-4186-8b9b-a0cb024d5d91-utilities\") pod \"bde1e264-573c-4186-8b9b-a0cb024d5d91\" (UID: \"bde1e264-573c-4186-8b9b-a0cb024d5d91\") " Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.575904 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bde1e264-573c-4186-8b9b-a0cb024d5d91-utilities" (OuterVolumeSpecName: "utilities") pod "bde1e264-573c-4186-8b9b-a0cb024d5d91" (UID: "bde1e264-573c-4186-8b9b-a0cb024d5d91"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.586263 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bde1e264-573c-4186-8b9b-a0cb024d5d91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bde1e264-573c-4186-8b9b-a0cb024d5d91" (UID: "bde1e264-573c-4186-8b9b-a0cb024d5d91"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.586427 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bde1e264-573c-4186-8b9b-a0cb024d5d91-kube-api-access-l7whw" (OuterVolumeSpecName: "kube-api-access-l7whw") pod "bde1e264-573c-4186-8b9b-a0cb024d5d91" (UID: "bde1e264-573c-4186-8b9b-a0cb024d5d91"). InnerVolumeSpecName "kube-api-access-l7whw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.594010 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7f8m" event={"ID":"080522e6-050a-4df7-afe5-2476e455e157","Type":"ContainerStarted","Data":"498e752f0468710b1205a2632de0e291557e6ba1713647312e5db1c4642c6f48"} Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.606403 4765 generic.go:334] "Generic (PLEG): container finished" podID="bde1e264-573c-4186-8b9b-a0cb024d5d91" containerID="8abe7575ee5ee2de03eef01ef93d822fb955841dffd71024dd19b0d7d2978cd1" exitCode=0 Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.606485 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrqmv" Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.606500 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrqmv" event={"ID":"bde1e264-573c-4186-8b9b-a0cb024d5d91","Type":"ContainerDied","Data":"8abe7575ee5ee2de03eef01ef93d822fb955841dffd71024dd19b0d7d2978cd1"} Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.607114 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrqmv" event={"ID":"bde1e264-573c-4186-8b9b-a0cb024d5d91","Type":"ContainerDied","Data":"b3df9c75120af2ae1fa720405e149952d6a5850229823bbe20bfc4e8db71067d"} Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.607142 4765 scope.go:117] "RemoveContainer" containerID="8abe7575ee5ee2de03eef01ef93d822fb955841dffd71024dd19b0d7d2978cd1" Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.632281 4765 scope.go:117] "RemoveContainer" containerID="2e40057a6fbccea2944492f59aa38edca0e4ff0c227bb4edee1b57c7373222e2" Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.640877 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x7f8m" podStartSLOduration=4.854515026 podStartE2EDuration="1m13.64085542s" podCreationTimestamp="2026-01-21 13:05:10 +0000 UTC" firstStartedPulling="2026-01-21 13:05:13.795510744 +0000 UTC m=+174.813236566" lastFinishedPulling="2026-01-21 13:06:22.581851138 +0000 UTC m=+243.599576960" observedRunningTime="2026-01-21 13:06:23.624137589 +0000 UTC m=+244.641863421" watchObservedRunningTime="2026-01-21 13:06:23.64085542 +0000 UTC m=+244.658581242" Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.641821 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrqmv"] Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.649991 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrqmv"] Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.660761 4765 scope.go:117] "RemoveContainer" containerID="21bed4f1353abc190dd8f3f20f5667a8203657de345cb8c5cfb77ba69a812f88" Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.676401 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7whw\" (UniqueName: \"kubernetes.io/projected/bde1e264-573c-4186-8b9b-a0cb024d5d91-kube-api-access-l7whw\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.677354 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bde1e264-573c-4186-8b9b-a0cb024d5d91-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.677396 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bde1e264-573c-4186-8b9b-a0cb024d5d91-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.680453 4765 scope.go:117] "RemoveContainer" containerID="8abe7575ee5ee2de03eef01ef93d822fb955841dffd71024dd19b0d7d2978cd1" Jan 21 13:06:23 crc kubenswrapper[4765]: E0121 13:06:23.681001 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8abe7575ee5ee2de03eef01ef93d822fb955841dffd71024dd19b0d7d2978cd1\": container with ID starting with 
8abe7575ee5ee2de03eef01ef93d822fb955841dffd71024dd19b0d7d2978cd1 not found: ID does not exist" containerID="8abe7575ee5ee2de03eef01ef93d822fb955841dffd71024dd19b0d7d2978cd1" Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.681050 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8abe7575ee5ee2de03eef01ef93d822fb955841dffd71024dd19b0d7d2978cd1"} err="failed to get container status \"8abe7575ee5ee2de03eef01ef93d822fb955841dffd71024dd19b0d7d2978cd1\": rpc error: code = NotFound desc = could not find container \"8abe7575ee5ee2de03eef01ef93d822fb955841dffd71024dd19b0d7d2978cd1\": container with ID starting with 8abe7575ee5ee2de03eef01ef93d822fb955841dffd71024dd19b0d7d2978cd1 not found: ID does not exist" Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.681086 4765 scope.go:117] "RemoveContainer" containerID="2e40057a6fbccea2944492f59aa38edca0e4ff0c227bb4edee1b57c7373222e2" Jan 21 13:06:23 crc kubenswrapper[4765]: E0121 13:06:23.681983 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e40057a6fbccea2944492f59aa38edca0e4ff0c227bb4edee1b57c7373222e2\": container with ID starting with 2e40057a6fbccea2944492f59aa38edca0e4ff0c227bb4edee1b57c7373222e2 not found: ID does not exist" containerID="2e40057a6fbccea2944492f59aa38edca0e4ff0c227bb4edee1b57c7373222e2" Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.682036 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e40057a6fbccea2944492f59aa38edca0e4ff0c227bb4edee1b57c7373222e2"} err="failed to get container status \"2e40057a6fbccea2944492f59aa38edca0e4ff0c227bb4edee1b57c7373222e2\": rpc error: code = NotFound desc = could not find container \"2e40057a6fbccea2944492f59aa38edca0e4ff0c227bb4edee1b57c7373222e2\": container with ID starting with 2e40057a6fbccea2944492f59aa38edca0e4ff0c227bb4edee1b57c7373222e2 not found: ID does not exist" Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.682080 4765 scope.go:117] "RemoveContainer" containerID="21bed4f1353abc190dd8f3f20f5667a8203657de345cb8c5cfb77ba69a812f88" Jan 21 13:06:23 crc kubenswrapper[4765]: E0121 13:06:23.682774 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21bed4f1353abc190dd8f3f20f5667a8203657de345cb8c5cfb77ba69a812f88\": container with ID starting with 21bed4f1353abc190dd8f3f20f5667a8203657de345cb8c5cfb77ba69a812f88 not found: ID does not exist" containerID="21bed4f1353abc190dd8f3f20f5667a8203657de345cb8c5cfb77ba69a812f88" Jan 21 13:06:23 crc kubenswrapper[4765]: I0121 13:06:23.682878 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21bed4f1353abc190dd8f3f20f5667a8203657de345cb8c5cfb77ba69a812f88"} err="failed to get container status \"21bed4f1353abc190dd8f3f20f5667a8203657de345cb8c5cfb77ba69a812f88\": rpc error: code = NotFound desc = could not find container \"21bed4f1353abc190dd8f3f20f5667a8203657de345cb8c5cfb77ba69a812f88\": container with ID starting with 21bed4f1353abc190dd8f3f20f5667a8203657de345cb8c5cfb77ba69a812f88 not found: ID does not exist" Jan 21 13:06:25 crc kubenswrapper[4765]: I0121 13:06:25.622539 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bde1e264-573c-4186-8b9b-a0cb024d5d91" path="/var/lib/kubelet/pods/bde1e264-573c-4186-8b9b-a0cb024d5d91/volumes" Jan 21 13:06:27 crc kubenswrapper[4765]: I0121 13:06:27.766371 
4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-csdrp" Jan 21 13:06:27 crc kubenswrapper[4765]: I0121 13:06:27.766763 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-csdrp" Jan 21 13:06:27 crc kubenswrapper[4765]: I0121 13:06:27.817124 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-csdrp" Jan 21 13:06:27 crc kubenswrapper[4765]: I0121 13:06:27.880239 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p4dt7" Jan 21 13:06:27 crc kubenswrapper[4765]: I0121 13:06:27.880318 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p4dt7" Jan 21 13:06:27 crc kubenswrapper[4765]: I0121 13:06:27.917580 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p4dt7" Jan 21 13:06:28 crc kubenswrapper[4765]: I0121 13:06:28.680742 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-csdrp" Jan 21 13:06:28 crc kubenswrapper[4765]: I0121 13:06:28.680900 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p4dt7" Jan 21 13:06:28 crc kubenswrapper[4765]: I0121 13:06:28.937728 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zwg7s" Jan 21 13:06:28 crc kubenswrapper[4765]: I0121 13:06:28.938340 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zwg7s" Jan 21 13:06:28 crc kubenswrapper[4765]: I0121 13:06:28.976100 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zwg7s" Jan 21 13:06:29 crc kubenswrapper[4765]: I0121 13:06:29.683839 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zwg7s" Jan 21 13:06:29 crc kubenswrapper[4765]: I0121 13:06:29.882709 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p4dt7"] Jan 21 13:06:30 crc kubenswrapper[4765]: I0121 13:06:30.557165 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x7f8m" Jan 21 13:06:30 crc kubenswrapper[4765]: I0121 13:06:30.557256 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x7f8m" Jan 21 13:06:30 crc kubenswrapper[4765]: I0121 13:06:30.597493 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x7f8m" Jan 21 13:06:30 crc kubenswrapper[4765]: I0121 13:06:30.651592 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p4dt7" podUID="0c876b68-6eab-460d-983d-51514e30fbd1" containerName="registry-server" containerID="cri-o://693aa52d2385d4e72f9fd097808fcb61f345f417cc77ac975bfc049e3ee84073" gracePeriod=2 Jan 21 13:06:30 crc kubenswrapper[4765]: I0121 13:06:30.698480 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x7f8m" Jan 21 13:06:31 crc kubenswrapper[4765]: 
I0121 13:06:31.549678 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p4dt7" Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.659859 4765 generic.go:334] "Generic (PLEG): container finished" podID="0c876b68-6eab-460d-983d-51514e30fbd1" containerID="693aa52d2385d4e72f9fd097808fcb61f345f417cc77ac975bfc049e3ee84073" exitCode=0 Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.659951 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4dt7" event={"ID":"0c876b68-6eab-460d-983d-51514e30fbd1","Type":"ContainerDied","Data":"693aa52d2385d4e72f9fd097808fcb61f345f417cc77ac975bfc049e3ee84073"} Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.660016 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4dt7" event={"ID":"0c876b68-6eab-460d-983d-51514e30fbd1","Type":"ContainerDied","Data":"f2b02cd94f625346cfe28438919b588d804745fca03476599d1a8d794ab45820"} Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.659965 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p4dt7" Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.660036 4765 scope.go:117] "RemoveContainer" containerID="693aa52d2385d4e72f9fd097808fcb61f345f417cc77ac975bfc049e3ee84073" Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.678546 4765 scope.go:117] "RemoveContainer" containerID="d6a064c62809a4de969b28c31d3f235db37ecdc7f7e1a76c06e3d4543929357d" Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.691460 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c876b68-6eab-460d-983d-51514e30fbd1-utilities\") pod \"0c876b68-6eab-460d-983d-51514e30fbd1\" (UID: \"0c876b68-6eab-460d-983d-51514e30fbd1\") " Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.691529 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x59rb\" (UniqueName: \"kubernetes.io/projected/0c876b68-6eab-460d-983d-51514e30fbd1-kube-api-access-x59rb\") pod \"0c876b68-6eab-460d-983d-51514e30fbd1\" (UID: \"0c876b68-6eab-460d-983d-51514e30fbd1\") " Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.691611 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c876b68-6eab-460d-983d-51514e30fbd1-catalog-content\") pod \"0c876b68-6eab-460d-983d-51514e30fbd1\" (UID: \"0c876b68-6eab-460d-983d-51514e30fbd1\") " Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.693183 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c876b68-6eab-460d-983d-51514e30fbd1-utilities" (OuterVolumeSpecName: "utilities") pod "0c876b68-6eab-460d-983d-51514e30fbd1" (UID: "0c876b68-6eab-460d-983d-51514e30fbd1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.694951 4765 scope.go:117] "RemoveContainer" containerID="5c4ae586aaf1ed88205caadbf4c1d649720102ae00810664a8643722efe0e550" Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.700328 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c876b68-6eab-460d-983d-51514e30fbd1-kube-api-access-x59rb" (OuterVolumeSpecName: "kube-api-access-x59rb") pod "0c876b68-6eab-460d-983d-51514e30fbd1" (UID: "0c876b68-6eab-460d-983d-51514e30fbd1"). InnerVolumeSpecName "kube-api-access-x59rb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.726878 4765 scope.go:117] "RemoveContainer" containerID="693aa52d2385d4e72f9fd097808fcb61f345f417cc77ac975bfc049e3ee84073" Jan 21 13:06:31 crc kubenswrapper[4765]: E0121 13:06:31.727496 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"693aa52d2385d4e72f9fd097808fcb61f345f417cc77ac975bfc049e3ee84073\": container with ID starting with 693aa52d2385d4e72f9fd097808fcb61f345f417cc77ac975bfc049e3ee84073 not found: ID does not exist" containerID="693aa52d2385d4e72f9fd097808fcb61f345f417cc77ac975bfc049e3ee84073" Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.727540 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"693aa52d2385d4e72f9fd097808fcb61f345f417cc77ac975bfc049e3ee84073"} err="failed to get container status \"693aa52d2385d4e72f9fd097808fcb61f345f417cc77ac975bfc049e3ee84073\": rpc error: code = NotFound desc = could not find container \"693aa52d2385d4e72f9fd097808fcb61f345f417cc77ac975bfc049e3ee84073\": container with ID starting with 693aa52d2385d4e72f9fd097808fcb61f345f417cc77ac975bfc049e3ee84073 not found: ID does not exist" Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.727573 4765 scope.go:117] "RemoveContainer" containerID="d6a064c62809a4de969b28c31d3f235db37ecdc7f7e1a76c06e3d4543929357d" Jan 21 13:06:31 crc kubenswrapper[4765]: E0121 13:06:31.728559 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6a064c62809a4de969b28c31d3f235db37ecdc7f7e1a76c06e3d4543929357d\": container with ID starting with d6a064c62809a4de969b28c31d3f235db37ecdc7f7e1a76c06e3d4543929357d not found: ID does not exist" containerID="d6a064c62809a4de969b28c31d3f235db37ecdc7f7e1a76c06e3d4543929357d" Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.728594 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6a064c62809a4de969b28c31d3f235db37ecdc7f7e1a76c06e3d4543929357d"} err="failed to get container status \"d6a064c62809a4de969b28c31d3f235db37ecdc7f7e1a76c06e3d4543929357d\": rpc error: code = NotFound desc = could not find container \"d6a064c62809a4de969b28c31d3f235db37ecdc7f7e1a76c06e3d4543929357d\": container with ID starting with d6a064c62809a4de969b28c31d3f235db37ecdc7f7e1a76c06e3d4543929357d not found: ID does not exist" Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.728616 4765 scope.go:117] "RemoveContainer" containerID="5c4ae586aaf1ed88205caadbf4c1d649720102ae00810664a8643722efe0e550" Jan 21 13:06:31 crc kubenswrapper[4765]: E0121 13:06:31.728931 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5c4ae586aaf1ed88205caadbf4c1d649720102ae00810664a8643722efe0e550\": container with ID starting with 5c4ae586aaf1ed88205caadbf4c1d649720102ae00810664a8643722efe0e550 not found: ID does not exist" containerID="5c4ae586aaf1ed88205caadbf4c1d649720102ae00810664a8643722efe0e550" Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.728953 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c4ae586aaf1ed88205caadbf4c1d649720102ae00810664a8643722efe0e550"} err="failed to get container status \"5c4ae586aaf1ed88205caadbf4c1d649720102ae00810664a8643722efe0e550\": rpc error: code = NotFound desc = could not find container \"5c4ae586aaf1ed88205caadbf4c1d649720102ae00810664a8643722efe0e550\": container with ID starting with 5c4ae586aaf1ed88205caadbf4c1d649720102ae00810664a8643722efe0e550 not found: ID does not exist" Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.746863 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c876b68-6eab-460d-983d-51514e30fbd1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c876b68-6eab-460d-983d-51514e30fbd1" (UID: "0c876b68-6eab-460d-983d-51514e30fbd1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.794066 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c876b68-6eab-460d-983d-51514e30fbd1-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.794105 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x59rb\" (UniqueName: \"kubernetes.io/projected/0c876b68-6eab-460d-983d-51514e30fbd1-kube-api-access-x59rb\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.794116 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c876b68-6eab-460d-983d-51514e30fbd1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.993152 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p4dt7"] Jan 21 13:06:31 crc kubenswrapper[4765]: I0121 13:06:31.996263 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p4dt7"] Jan 21 13:06:33 crc kubenswrapper[4765]: I0121 13:06:33.621520 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c876b68-6eab-460d-983d-51514e30fbd1" path="/var/lib/kubelet/pods/0c876b68-6eab-460d-983d-51514e30fbd1/volumes" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.993135 4765 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.993857 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e452dd-2411-4ffb-8b6a-fed70777e6fc" containerName="extract-content" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.993873 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e452dd-2411-4ffb-8b6a-fed70777e6fc" containerName="extract-content" Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.993888 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde1e264-573c-4186-8b9b-a0cb024d5d91" containerName="extract-content" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 
13:06:34.993896 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde1e264-573c-4186-8b9b-a0cb024d5d91" containerName="extract-content" Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.993919 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e452dd-2411-4ffb-8b6a-fed70777e6fc" containerName="registry-server" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.993927 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e452dd-2411-4ffb-8b6a-fed70777e6fc" containerName="registry-server" Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.993947 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c876b68-6eab-460d-983d-51514e30fbd1" containerName="extract-content" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.993955 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c876b68-6eab-460d-983d-51514e30fbd1" containerName="extract-content" Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.993968 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c876b68-6eab-460d-983d-51514e30fbd1" containerName="registry-server" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.993977 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c876b68-6eab-460d-983d-51514e30fbd1" containerName="registry-server" Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.993988 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db200a0-358b-415c-960e-cec8935a0435" containerName="extract-utilities" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.993995 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db200a0-358b-415c-960e-cec8935a0435" containerName="extract-utilities" Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.994005 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e452dd-2411-4ffb-8b6a-fed70777e6fc" containerName="extract-utilities" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.994012 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e452dd-2411-4ffb-8b6a-fed70777e6fc" containerName="extract-utilities" Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.994023 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c876b68-6eab-460d-983d-51514e30fbd1" containerName="extract-utilities" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.994030 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c876b68-6eab-460d-983d-51514e30fbd1" containerName="extract-utilities" Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.994039 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5effab5-52b4-4fb7-bbe7-071784ce84d4" containerName="pruner" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.994046 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5effab5-52b4-4fb7-bbe7-071784ce84d4" containerName="pruner" Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.994064 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db200a0-358b-415c-960e-cec8935a0435" containerName="registry-server" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.994072 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db200a0-358b-415c-960e-cec8935a0435" containerName="registry-server" Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.994082 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde1e264-573c-4186-8b9b-a0cb024d5d91" containerName="extract-utilities" Jan 21 13:06:34 crc 
kubenswrapper[4765]: I0121 13:06:34.994089 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde1e264-573c-4186-8b9b-a0cb024d5d91" containerName="extract-utilities" Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.994100 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde1e264-573c-4186-8b9b-a0cb024d5d91" containerName="registry-server" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.994107 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde1e264-573c-4186-8b9b-a0cb024d5d91" containerName="registry-server" Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.994116 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db200a0-358b-415c-960e-cec8935a0435" containerName="extract-content" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.994123 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db200a0-358b-415c-960e-cec8935a0435" containerName="extract-content" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.994256 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c876b68-6eab-460d-983d-51514e30fbd1" containerName="registry-server" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.994267 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="bde1e264-573c-4186-8b9b-a0cb024d5d91" containerName="registry-server" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.994276 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e452dd-2411-4ffb-8b6a-fed70777e6fc" containerName="registry-server" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.994285 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5effab5-52b4-4fb7-bbe7-071784ce84d4" containerName="pruner" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.994293 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="7db200a0-358b-415c-960e-cec8935a0435" containerName="registry-server" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.994694 4765 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.994821 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.995000 4765 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.995196 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245" gracePeriod=15 Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.995201 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511" gracePeriod=15 Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.995302 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53" gracePeriod=15 Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.995366 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b" gracePeriod=15 Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.995657 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.995672 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.995682 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.995690 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.995700 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.995708 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.995718 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.995727 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.995739 4765 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.995746 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.995758 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.995767 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 13:06:34 crc kubenswrapper[4765]: E0121 13:06:34.995777 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.995784 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.995894 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.995906 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.995916 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.995926 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.995940 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.995951 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 13:06:34 crc kubenswrapper[4765]: I0121 13:06:34.995292 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34" gracePeriod=15 Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.053276 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.145312 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.145376 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.145415 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.145560 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.145584 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.145619 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.145640 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.145875 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.246736 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.246821 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.246856 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.246885 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.246897 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.246955 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.247007 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.247016 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.247051 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.247080 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.247089 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.247104 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.247145 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.247162 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.247185 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.247259 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.344454 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:06:35 crc kubenswrapper[4765]: W0121 13:06:35.372264 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-735620fe2b91a418a3b25cfb173413d71f4e42d62ed0261e5250837a2b963b20 WatchSource:0}: Error finding container 735620fe2b91a418a3b25cfb173413d71f4e42d62ed0261e5250837a2b963b20: Status 404 returned error can't find the container with id 735620fe2b91a418a3b25cfb173413d71f4e42d62ed0261e5250837a2b963b20 Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.683331 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"735620fe2b91a418a3b25cfb173413d71f4e42d62ed0261e5250837a2b963b20"} Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.686293 4765 generic.go:334] "Generic (PLEG): container finished" podID="32a0b174-c516-4ed9-9204-e1f15dd18d59" containerID="47225b7789413a8ca919b146b40e0e567d06908c8eec9d82ac4af3b094846a93" exitCode=0 Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.686390 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"32a0b174-c516-4ed9-9204-e1f15dd18d59","Type":"ContainerDied","Data":"47225b7789413a8ca919b146b40e0e567d06908c8eec9d82ac4af3b094846a93"} Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.687586 4765 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.687968 4765 status_manager.go:851] "Failed to get status for pod" podUID="32a0b174-c516-4ed9-9204-e1f15dd18d59" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:35 crc kubenswrapper[4765]: E0121 13:06:35.689052 4765 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.144:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cc0d9e269665a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 13:06:35.688756826 +0000 UTC m=+256.706482648,LastTimestamp:2026-01-21 13:06:35.688756826 +0000 UTC m=+256.706482648,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.692563 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.693821 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.694514 4765 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245" exitCode=0 Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.694543 4765 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511" exitCode=0 Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.694554 4765 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34" exitCode=0 Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.694563 4765 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53" exitCode=2 Jan 21 13:06:35 crc kubenswrapper[4765]: I0121 13:06:35.694611 4765 scope.go:117] "RemoveContainer" containerID="691f48117d7a4377b427d3fcda6cc8485580e46986d7ece326e012de5b89a9d5" Jan 21 13:06:36 crc kubenswrapper[4765]: E0121 13:06:36.048398 4765 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.144:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cc0d9e269665a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 13:06:35.688756826 +0000 UTC m=+256.706482648,LastTimestamp:2026-01-21 13:06:35.688756826 +0000 UTC m=+256.706482648,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 13:06:36 crc kubenswrapper[4765]: E0121 13:06:36.513062 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:06:36Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:06:36Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:06:36Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:06:36Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:020b5bee2bbd09fbf64a1af808628bb76e9c70b9efdc49f38e5a50641590514c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:78f8ee56f09c047b3acd7e5b6b8a0f9534952f418b658c9f5a6d45d12546e67c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1670570239},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:985a76d8ebbdf8ece24003afb1d6ad0bf3e155bd005676f602d7f97cdad463c1\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:a52c9b1b8a47036a88322e4db1511ead83746d3ba41ce098059642099a09525e\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202798827},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:2b72e40c5d5b36b681f40c16ebf3dcac6520ed0c79f174ba87f673ab7afd209a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:d83ee77ad07e06451a84205ac4c85c69e912a1c975e1a8a95095d79218028dce\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1178956511},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:aae73aa11d44b8831c829464aa5515a56a9a8ef17926d54a010e0e9215ecd643\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cd24673e95503ac856405941c96e75f11ca6da85fe80950e0dd00bb1062f9f47\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1166891762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\
\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:36 crc kubenswrapper[4765]: E0121 13:06:36.513787 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:36 crc kubenswrapper[4765]: E0121 13:06:36.514009 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:36 crc kubenswrapper[4765]: E0121 13:06:36.514177 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:36 crc kubenswrapper[4765]: E0121 13:06:36.514370 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:36 crc kubenswrapper[4765]: E0121 13:06:36.514386 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 13:06:36 crc kubenswrapper[4765]: I0121 13:06:36.702198 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7ce4552dff00d86594c37e0f77c3cf45aa3bcc555401456b5ff0411bc246106a"} Jan 21 13:06:36 crc kubenswrapper[4765]: I0121 13:06:36.703355 4765 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:36 crc kubenswrapper[4765]: I0121 13:06:36.703605 4765 status_manager.go:851] "Failed to get status for pod" podUID="32a0b174-c516-4ed9-9204-e1f15dd18d59" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:36 crc kubenswrapper[4765]: I0121 13:06:36.705766 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 13:06:36 crc kubenswrapper[4765]: I0121 13:06:36.935037 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:06:36 crc kubenswrapper[4765]: I0121 13:06:36.935955 4765 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:36 crc kubenswrapper[4765]: I0121 13:06:36.936572 4765 status_manager.go:851] "Failed to get status for pod" podUID="32a0b174-c516-4ed9-9204-e1f15dd18d59" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.082004 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32a0b174-c516-4ed9-9204-e1f15dd18d59-kube-api-access\") pod \"32a0b174-c516-4ed9-9204-e1f15dd18d59\" (UID: \"32a0b174-c516-4ed9-9204-e1f15dd18d59\") " Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.082120 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32a0b174-c516-4ed9-9204-e1f15dd18d59-kubelet-dir\") pod \"32a0b174-c516-4ed9-9204-e1f15dd18d59\" (UID: \"32a0b174-c516-4ed9-9204-e1f15dd18d59\") " Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.082251 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32a0b174-c516-4ed9-9204-e1f15dd18d59-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "32a0b174-c516-4ed9-9204-e1f15dd18d59" (UID: "32a0b174-c516-4ed9-9204-e1f15dd18d59"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.082264 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32a0b174-c516-4ed9-9204-e1f15dd18d59-var-lock\") pod \"32a0b174-c516-4ed9-9204-e1f15dd18d59\" (UID: \"32a0b174-c516-4ed9-9204-e1f15dd18d59\") " Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.082289 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32a0b174-c516-4ed9-9204-e1f15dd18d59-var-lock" (OuterVolumeSpecName: "var-lock") pod "32a0b174-c516-4ed9-9204-e1f15dd18d59" (UID: "32a0b174-c516-4ed9-9204-e1f15dd18d59"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.082542 4765 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32a0b174-c516-4ed9-9204-e1f15dd18d59-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.082556 4765 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32a0b174-c516-4ed9-9204-e1f15dd18d59-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.092790 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a0b174-c516-4ed9-9204-e1f15dd18d59-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "32a0b174-c516-4ed9-9204-e1f15dd18d59" (UID: "32a0b174-c516-4ed9-9204-e1f15dd18d59"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.184090 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32a0b174-c516-4ed9-9204-e1f15dd18d59-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.714339 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.714333 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"32a0b174-c516-4ed9-9204-e1f15dd18d59","Type":"ContainerDied","Data":"69a531db6890734e3191bcde1e91fd789d8061bdde28677c690be81a77d8af27"} Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.715072 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69a531db6890734e3191bcde1e91fd789d8061bdde28677c690be81a77d8af27" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.719109 4765 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.719482 4765 status_manager.go:851] "Failed to get status for pod" podUID="32a0b174-c516-4ed9-9204-e1f15dd18d59" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.851613 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.852231 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.853252 4765 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.853524 4765 status_manager.go:851] "Failed to get status for pod" podUID="32a0b174-c516-4ed9-9204-e1f15dd18d59" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.853765 4765 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.996963 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.997069 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.997113 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.997158 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.997268 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.997265 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.997546 4765 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.997567 4765 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:37 crc kubenswrapper[4765]: I0121 13:06:37.997578 4765 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.723298 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.725809 4765 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b" exitCode=0 Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.725881 4765 scope.go:117] "RemoveContainer" containerID="261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.726056 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.742037 4765 status_manager.go:851] "Failed to get status for pod" podUID="32a0b174-c516-4ed9-9204-e1f15dd18d59" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.743453 4765 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.743864 4765 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.753924 4765 scope.go:117] "RemoveContainer" containerID="7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.780396 4765 scope.go:117] "RemoveContainer" containerID="ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.795933 4765 scope.go:117] "RemoveContainer" containerID="58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.812534 4765 scope.go:117] "RemoveContainer" 
containerID="1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.836697 4765 scope.go:117] "RemoveContainer" containerID="a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.873281 4765 scope.go:117] "RemoveContainer" containerID="261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245" Jan 21 13:06:38 crc kubenswrapper[4765]: E0121 13:06:38.879514 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\": container with ID starting with 261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245 not found: ID does not exist" containerID="261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.879594 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245"} err="failed to get container status \"261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\": rpc error: code = NotFound desc = could not find container \"261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245\": container with ID starting with 261970f9b8f4e7ab205dc1b5b1d75f8745c928023032e43f2650fa08f4e6b245 not found: ID does not exist" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.879670 4765 scope.go:117] "RemoveContainer" containerID="7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511" Jan 21 13:06:38 crc kubenswrapper[4765]: E0121 13:06:38.881725 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\": container with ID starting with 7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511 not found: ID does not exist" containerID="7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.881759 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511"} err="failed to get container status \"7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\": rpc error: code = NotFound desc = could not find container \"7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511\": container with ID starting with 7b0e9c476818c8a47f160ca3e716c292fe02b3944ebf92255e0d02e3dea42511 not found: ID does not exist" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.881790 4765 scope.go:117] "RemoveContainer" containerID="ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34" Jan 21 13:06:38 crc kubenswrapper[4765]: E0121 13:06:38.882255 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\": container with ID starting with ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34 not found: ID does not exist" containerID="ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.882281 4765 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34"} err="failed to get container status \"ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\": rpc error: code = NotFound desc = could not find container \"ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34\": container with ID starting with ebd952892faea9f794a492177b451dd7efb8a648cbdab8418eca5df217611d34 not found: ID does not exist" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.882298 4765 scope.go:117] "RemoveContainer" containerID="58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53" Jan 21 13:06:38 crc kubenswrapper[4765]: E0121 13:06:38.882553 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\": container with ID starting with 58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53 not found: ID does not exist" containerID="58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.882575 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53"} err="failed to get container status \"58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\": rpc error: code = NotFound desc = could not find container \"58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53\": container with ID starting with 58034efbc47dc0645935c2143742b3d1c5a94e7196a0170258208602538b0b53 not found: ID does not exist" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.882593 4765 scope.go:117] "RemoveContainer" containerID="1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b" Jan 21 13:06:38 crc kubenswrapper[4765]: E0121 13:06:38.882894 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\": container with ID starting with 1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b not found: ID does not exist" containerID="1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.882925 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b"} err="failed to get container status \"1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\": rpc error: code = NotFound desc = could not find container \"1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b\": container with ID starting with 1379aa6b56207613c300bc74b2bc0624f0a8b859f3c008a1f997ec26e8d1946b not found: ID does not exist" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.882960 4765 scope.go:117] "RemoveContainer" containerID="a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c" Jan 21 13:06:38 crc kubenswrapper[4765]: E0121 13:06:38.883943 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\": container with ID starting with a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c not found: ID does not exist" 
containerID="a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c" Jan 21 13:06:38 crc kubenswrapper[4765]: I0121 13:06:38.883999 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c"} err="failed to get container status \"a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\": rpc error: code = NotFound desc = could not find container \"a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c\": container with ID starting with a1e2cd6b911ac0886847513fb2a717820accf12170dfd89b8bb8e0d42bce6f3c not found: ID does not exist" Jan 21 13:06:39 crc kubenswrapper[4765]: I0121 13:06:39.616882 4765 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:39 crc kubenswrapper[4765]: I0121 13:06:39.617465 4765 status_manager.go:851] "Failed to get status for pod" podUID="32a0b174-c516-4ed9-9204-e1f15dd18d59" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:39 crc kubenswrapper[4765]: I0121 13:06:39.618418 4765 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:39 crc kubenswrapper[4765]: I0121 13:06:39.621642 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 21 13:06:45 crc kubenswrapper[4765]: E0121 13:06:45.368656 4765 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:45 crc kubenswrapper[4765]: E0121 13:06:45.369320 4765 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:45 crc kubenswrapper[4765]: E0121 13:06:45.369494 4765 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:45 crc kubenswrapper[4765]: E0121 13:06:45.369657 4765 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:45 crc kubenswrapper[4765]: E0121 13:06:45.369849 4765 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.144:6443: 
connect: connection refused" Jan 21 13:06:45 crc kubenswrapper[4765]: I0121 13:06:45.369873 4765 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 21 13:06:45 crc kubenswrapper[4765]: E0121 13:06:45.370044 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" interval="200ms" Jan 21 13:06:45 crc kubenswrapper[4765]: E0121 13:06:45.571294 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" interval="400ms" Jan 21 13:06:45 crc kubenswrapper[4765]: E0121 13:06:45.972875 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" interval="800ms" Jan 21 13:06:46 crc kubenswrapper[4765]: E0121 13:06:46.050286 4765 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.144:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cc0d9e269665a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 13:06:35.688756826 +0000 UTC m=+256.706482648,LastTimestamp:2026-01-21 13:06:35.688756826 +0000 UTC m=+256.706482648,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 13:06:46 crc kubenswrapper[4765]: I0121 13:06:46.613877 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:46 crc kubenswrapper[4765]: I0121 13:06:46.615005 4765 status_manager.go:851] "Failed to get status for pod" podUID="32a0b174-c516-4ed9-9204-e1f15dd18d59" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:46 crc kubenswrapper[4765]: I0121 13:06:46.615679 4765 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:46 crc kubenswrapper[4765]: I0121 13:06:46.632579 4765 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1981c521-dcec-4302-b34b-4464c8ebf331" Jan 21 13:06:46 crc kubenswrapper[4765]: I0121 13:06:46.632619 4765 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1981c521-dcec-4302-b34b-4464c8ebf331" Jan 21 13:06:46 crc kubenswrapper[4765]: E0121 13:06:46.633292 4765 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:46 crc kubenswrapper[4765]: I0121 13:06:46.633987 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:46 crc kubenswrapper[4765]: E0121 13:06:46.642047 4765 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.144:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" volumeName="registry-storage" Jan 21 13:06:46 crc kubenswrapper[4765]: E0121 13:06:46.773677 4765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" interval="1.6s" Jan 21 13:06:46 crc kubenswrapper[4765]: I0121 13:06:46.775351 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4b515d6d116a90fb5f0e54d4dfcf633c8f1aede06a77a28d01c493b49b374086"} Jan 21 13:06:46 crc kubenswrapper[4765]: E0121 13:06:46.831290 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:06:46Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:06:46Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:06:46Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T13:06:46Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:020b5bee2bbd09fbf64a1af808628bb76e9c70b9efdc49f38e5a50641590514c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:78f8ee56f09c047b3acd7e5b6b8a0f9534952f418b658c9f5a6d45d12546e67c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1670570239},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:985a76d8ebbdf8ece24003afb1d6ad0bf3e155bd005676f602d7f97cdad463c1\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:a52c9b1b8a47036a88322e4db1511ead83746d3ba41ce098059642099a09525e\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202798827},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:2b72e40c5d5b36b681f40c16ebf3dcac6520ed0c79f174ba87f673ab7afd209a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:d83ee77ad07e06451a84205ac4c85c69e912a1c975e1a8a95095d79218028dce\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1178956511},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:aae73aa11d44b8831c829464aa5515a56a9a8ef17926d54a010e0e9215ecd643\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cd24673e95503ac856405941c96e75f11ca6da85fe80950e0dd00bb1062f9f47\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1166891762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\
\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:46 crc kubenswrapper[4765]: E0121 13:06:46.831793 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:46 crc kubenswrapper[4765]: E0121 13:06:46.832094 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:46 crc kubenswrapper[4765]: E0121 13:06:46.832663 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:46 crc kubenswrapper[4765]: E0121 13:06:46.833396 4765 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:46 crc kubenswrapper[4765]: E0121 13:06:46.833432 4765 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 13:06:47 crc kubenswrapper[4765]: I0121 13:06:47.787872 4765 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="99d17ba6d49959922deff925180560fcb000eeaf88b5c64a860146c4a4a160b4" exitCode=0 Jan 21 13:06:47 crc kubenswrapper[4765]: I0121 13:06:47.787962 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"99d17ba6d49959922deff925180560fcb000eeaf88b5c64a860146c4a4a160b4"} Jan 21 13:06:47 crc kubenswrapper[4765]: I0121 13:06:47.789545 4765 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1981c521-dcec-4302-b34b-4464c8ebf331" Jan 21 13:06:47 crc kubenswrapper[4765]: I0121 13:06:47.789568 4765 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1981c521-dcec-4302-b34b-4464c8ebf331" Jan 21 13:06:47 crc kubenswrapper[4765]: E0121 13:06:47.790282 4765 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:47 crc kubenswrapper[4765]: I0121 13:06:47.790409 4765 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:47 crc kubenswrapper[4765]: I0121 13:06:47.790617 4765 status_manager.go:851] "Failed to get status for pod" podUID="32a0b174-c516-4ed9-9204-e1f15dd18d59" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.144:6443: connect: connection refused" Jan 21 13:06:48 crc kubenswrapper[4765]: I0121 13:06:48.798405 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"29b0bab6a2770c1158479855e9096bae4564adfe9839e78e68e2e6872d90f98c"} Jan 21 13:06:48 crc kubenswrapper[4765]: I0121 13:06:48.798718 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f0040848b83df7bfd27e7ba57cb828e83105a2f1038e35e6c44d35edf42da087"} Jan 21 13:06:48 crc kubenswrapper[4765]: I0121 13:06:48.798728 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"549280663a80304ed88dc94b788f4b80e6972877d97129d43311170ebfa0ff5f"} Jan 21 13:06:48 crc kubenswrapper[4765]: I0121 13:06:48.798743 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"12bd345ee0931e7f0fde867d08f8a415ec039e731d04694374db4d55a2edbbf6"} Jan 21 13:06:48 crc kubenswrapper[4765]: I0121 13:06:48.805988 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 13:06:48 crc kubenswrapper[4765]: I0121 13:06:48.806038 4765 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91" exitCode=1 Jan 21 13:06:48 crc kubenswrapper[4765]: I0121 13:06:48.806075 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91"} Jan 21 13:06:48 crc kubenswrapper[4765]: I0121 13:06:48.806576 4765 scope.go:117] "RemoveContainer" containerID="cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91" Jan 21 13:06:49 crc kubenswrapper[4765]: I0121 13:06:49.816031 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8cf03d1966a01480a78ade1db6eb11aebded1f6d591926548dfe07d64737077e"} Jan 21 13:06:49 crc kubenswrapper[4765]: I0121 13:06:49.817411 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:49 crc kubenswrapper[4765]: I0121 13:06:49.816448 4765 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1981c521-dcec-4302-b34b-4464c8ebf331" Jan 21 13:06:49 crc kubenswrapper[4765]: I0121 13:06:49.817609 4765 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1981c521-dcec-4302-b34b-4464c8ebf331" Jan 21 13:06:49 crc kubenswrapper[4765]: I0121 13:06:49.819200 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 13:06:49 crc kubenswrapper[4765]: I0121 13:06:49.819329 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bdcb8c297cbd6fda01e719c58cef4ef067896dbd93516b16f73b6e62d1ad8fe2"} Jan 21 13:06:50 crc kubenswrapper[4765]: I0121 13:06:50.320766 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:06:50 crc kubenswrapper[4765]: I0121 13:06:50.320828 4765 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 13:06:50 crc kubenswrapper[4765]: I0121 13:06:50.320896 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 13:06:51 crc kubenswrapper[4765]: I0121 13:06:51.634536 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:51 crc kubenswrapper[4765]: I0121 13:06:51.634610 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:51 crc kubenswrapper[4765]: I0121 13:06:51.641574 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:51 crc kubenswrapper[4765]: I0121 13:06:51.647646 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:06:51 crc kubenswrapper[4765]: I0121 13:06:51.647950 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:06:51 crc kubenswrapper[4765]: I0121 13:06:51.648107 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:06:51 crc kubenswrapper[4765]: I0121 13:06:51.648397 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:06:51 crc kubenswrapper[4765]: I0121 13:06:51.651347 4765 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 13:06:51 crc kubenswrapper[4765]: I0121 13:06:51.651374 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 13:06:51 crc kubenswrapper[4765]: I0121 13:06:51.651354 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 13:06:51 crc kubenswrapper[4765]: I0121 13:06:51.659743 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:06:51 crc kubenswrapper[4765]: I0121 13:06:51.660653 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 13:06:51 crc kubenswrapper[4765]: I0121 13:06:51.670160 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:06:51 crc kubenswrapper[4765]: I0121 13:06:51.674781 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:06:51 crc kubenswrapper[4765]: I0121 13:06:51.675971 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:06:51 crc kubenswrapper[4765]: I0121 13:06:51.833591 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 13:06:51 crc kubenswrapper[4765]: I0121 13:06:51.841101 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:06:51 crc kubenswrapper[4765]: I0121 13:06:51.962415 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 13:06:52 crc kubenswrapper[4765]: W0121 13:06:52.343355 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-72e0677d56e7c6e34c1eacda2f140f62cb147978218ebbc5fe7880db81eff2cd WatchSource:0}: Error finding container 72e0677d56e7c6e34c1eacda2f140f62cb147978218ebbc5fe7880db81eff2cd: Status 404 returned error can't find the container with id 72e0677d56e7c6e34c1eacda2f140f62cb147978218ebbc5fe7880db81eff2cd Jan 21 13:06:52 crc kubenswrapper[4765]: W0121 13:06:52.420708 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-0d06d6ce265754dfc6a956b7986446ab48fdb89ef35577426da065005d56466c WatchSource:0}: Error finding container 0d06d6ce265754dfc6a956b7986446ab48fdb89ef35577426da065005d56466c: Status 404 returned error can't find the container with id 0d06d6ce265754dfc6a956b7986446ab48fdb89ef35577426da065005d56466c Jan 21 13:06:52 crc kubenswrapper[4765]: I0121 13:06:52.839600 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a4be030771b45b28642276306633976e190d39ccffb5ee2959d4201555ed1e58"} Jan 21 13:06:52 crc kubenswrapper[4765]: I0121 13:06:52.841774 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"98f24884bf7cca2a82c0fc2f56a3963a2c82720096fca97db256c08b59388f7d"} Jan 21 13:06:52 crc kubenswrapper[4765]: I0121 13:06:52.842329 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"adc3ba6cf6d3cf378801a0f6d7968bf6f4a6e6c5af6308d9fc01ebb5c282ab96"} Jan 21 13:06:52 crc kubenswrapper[4765]: I0121 13:06:52.842398 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0d06d6ce265754dfc6a956b7986446ab48fdb89ef35577426da065005d56466c"} Jan 21 13:06:52 crc kubenswrapper[4765]: I0121 13:06:52.843864 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"10c0b04193b026bba1606219e9b5a2e12ac989e5a4eef8ffe4db271078d1c76f"} Jan 21 13:06:52 crc kubenswrapper[4765]: I0121 13:06:52.843904 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"72e0677d56e7c6e34c1eacda2f140f62cb147978218ebbc5fe7880db81eff2cd"} Jan 21 13:06:52 crc kubenswrapper[4765]: I0121 13:06:52.844322 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:06:53 crc kubenswrapper[4765]: I0121 13:06:53.852312 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Jan 21 13:06:53 crc kubenswrapper[4765]: I0121 13:06:53.852674 4765 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="a4be030771b45b28642276306633976e190d39ccffb5ee2959d4201555ed1e58" exitCode=255 Jan 21 13:06:53 crc kubenswrapper[4765]: I0121 13:06:53.852856 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"a4be030771b45b28642276306633976e190d39ccffb5ee2959d4201555ed1e58"} Jan 21 13:06:53 crc kubenswrapper[4765]: I0121 13:06:53.853648 4765 scope.go:117] "RemoveContainer" containerID="a4be030771b45b28642276306633976e190d39ccffb5ee2959d4201555ed1e58" Jan 21 13:06:54 crc kubenswrapper[4765]: I0121 13:06:54.855567 4765 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:06:54 crc kubenswrapper[4765]: I0121 13:06:54.860232 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 21 13:06:54 crc kubenswrapper[4765]: I0121 13:06:54.861200 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Jan 21 13:06:54 crc kubenswrapper[4765]: I0121 13:06:54.861431 4765 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="1f372141b14f18f7a800096f595a35eccb90fc0a7b37bf7f8343d3e1416c1393" exitCode=255 Jan 21 13:06:54 crc kubenswrapper[4765]: I0121 13:06:54.861533 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"1f372141b14f18f7a800096f595a35eccb90fc0a7b37bf7f8343d3e1416c1393"} Jan 21 13:06:54 crc kubenswrapper[4765]: I0121 13:06:54.861610 4765 scope.go:117] "RemoveContainer" containerID="a4be030771b45b28642276306633976e190d39ccffb5ee2959d4201555ed1e58" Jan 21 13:06:54 crc kubenswrapper[4765]: I0121 13:06:54.862112 4765 scope.go:117] "RemoveContainer" containerID="1f372141b14f18f7a800096f595a35eccb90fc0a7b37bf7f8343d3e1416c1393" Jan 21 13:06:54 crc kubenswrapper[4765]: E0121 13:06:54.862508 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:06:54 crc kubenswrapper[4765]: I0121 13:06:54.951691 4765 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="47161185-cdbc-4d58-85dd-3dee75b83989" Jan 21 13:06:55 crc kubenswrapper[4765]: I0121 13:06:55.871159 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 21 13:06:55 crc kubenswrapper[4765]: I0121 13:06:55.871827 4765 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1981c521-dcec-4302-b34b-4464c8ebf331" Jan 21 13:06:55 crc kubenswrapper[4765]: I0121 13:06:55.871869 4765 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1981c521-dcec-4302-b34b-4464c8ebf331" Jan 21 13:06:55 crc kubenswrapper[4765]: I0121 13:06:55.872009 4765 scope.go:117] "RemoveContainer" containerID="1f372141b14f18f7a800096f595a35eccb90fc0a7b37bf7f8343d3e1416c1393" Jan 21 13:06:55 crc kubenswrapper[4765]: E0121 13:06:55.872325 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:06:55 crc kubenswrapper[4765]: I0121 13:06:55.875247 4765 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="47161185-cdbc-4d58-85dd-3dee75b83989" Jan 21 13:06:56 crc kubenswrapper[4765]: I0121 13:06:56.869000 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:07:00 crc kubenswrapper[4765]: I0121 13:07:00.320993 4765 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 13:07:00 crc kubenswrapper[4765]: I0121 13:07:00.321345 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 13:07:04 crc kubenswrapper[4765]: I0121 13:07:04.915330 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 13:07:05 crc kubenswrapper[4765]: I0121 13:07:05.056010 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 13:07:06 crc kubenswrapper[4765]: I0121 13:07:06.139336 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 13:07:06 crc kubenswrapper[4765]: I0121 13:07:06.235781 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 13:07:06 crc kubenswrapper[4765]: I0121 13:07:06.492920 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 13:07:06 crc kubenswrapper[4765]: I0121 13:07:06.700829 4765 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 13:07:06 crc kubenswrapper[4765]: I0121 13:07:06.875527 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 13:07:07 crc kubenswrapper[4765]: I0121 13:07:07.003978 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 13:07:07 crc kubenswrapper[4765]: I0121 13:07:07.278309 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 13:07:07 crc kubenswrapper[4765]: I0121 13:07:07.492174 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 13:07:07 crc kubenswrapper[4765]: I0121 13:07:07.613932 4765 scope.go:117] "RemoveContainer" containerID="1f372141b14f18f7a800096f595a35eccb90fc0a7b37bf7f8343d3e1416c1393" Jan 21 13:07:07 crc kubenswrapper[4765]: I0121 13:07:07.958359 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 21 13:07:07 crc kubenswrapper[4765]: I0121 13:07:07.958770 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"fd7256b572d35d51a89a26a9d809e66ddaef4668fab63ce1fc2ed58bfc6736f7"} Jan 21 13:07:08 crc kubenswrapper[4765]: I0121 13:07:08.107312 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 13:07:08 crc kubenswrapper[4765]: I0121 13:07:08.140457 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 13:07:08 crc kubenswrapper[4765]: I0121 13:07:08.249863 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 13:07:08 crc kubenswrapper[4765]: I0121 13:07:08.249867 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 13:07:08 crc kubenswrapper[4765]: I0121 13:07:08.250425 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 13:07:08 crc kubenswrapper[4765]: I0121 13:07:08.310632 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 13:07:08 crc kubenswrapper[4765]: I0121 13:07:08.416473 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 13:07:08 crc kubenswrapper[4765]: I0121 13:07:08.418682 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 13:07:08 crc kubenswrapper[4765]: I0121 13:07:08.436103 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 13:07:08 crc kubenswrapper[4765]: I0121 13:07:08.602813 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 13:07:08 crc kubenswrapper[4765]: I0121 13:07:08.767892 4765 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 13:07:08 crc kubenswrapper[4765]: I0121 13:07:08.828340 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 13:07:08 crc kubenswrapper[4765]: I0121 13:07:08.967982 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" Jan 21 13:07:08 crc kubenswrapper[4765]: I0121 13:07:08.968992 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Jan 21 13:07:08 crc kubenswrapper[4765]: I0121 13:07:08.969064 4765 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="fd7256b572d35d51a89a26a9d809e66ddaef4668fab63ce1fc2ed58bfc6736f7" exitCode=255 Jan 21 13:07:08 crc kubenswrapper[4765]: I0121 13:07:08.969112 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"fd7256b572d35d51a89a26a9d809e66ddaef4668fab63ce1fc2ed58bfc6736f7"} Jan 21 13:07:08 crc kubenswrapper[4765]: I0121 13:07:08.969156 4765 scope.go:117] "RemoveContainer" containerID="1f372141b14f18f7a800096f595a35eccb90fc0a7b37bf7f8343d3e1416c1393" Jan 21 13:07:08 crc kubenswrapper[4765]: I0121 13:07:08.973524 4765 scope.go:117] "RemoveContainer" containerID="fd7256b572d35d51a89a26a9d809e66ddaef4668fab63ce1fc2ed58bfc6736f7" Jan 21 13:07:08 crc kubenswrapper[4765]: E0121 13:07:08.974036 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:07:09 crc kubenswrapper[4765]: I0121 13:07:09.395327 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 13:07:09 crc kubenswrapper[4765]: I0121 13:07:09.404425 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 13:07:09 crc kubenswrapper[4765]: I0121 13:07:09.442532 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 13:07:09 crc kubenswrapper[4765]: I0121 13:07:09.445803 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 13:07:09 crc kubenswrapper[4765]: I0121 13:07:09.497969 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 13:07:09 crc kubenswrapper[4765]: I0121 13:07:09.521858 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 13:07:09 crc kubenswrapper[4765]: I0121 13:07:09.669897 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 13:07:09 crc 
kubenswrapper[4765]: I0121 13:07:09.700034 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 13:07:09 crc kubenswrapper[4765]: I0121 13:07:09.705761 4765 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 13:07:09 crc kubenswrapper[4765]: I0121 13:07:09.878935 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 13:07:09 crc kubenswrapper[4765]: I0121 13:07:09.895083 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 13:07:09 crc kubenswrapper[4765]: I0121 13:07:09.901238 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 13:07:09 crc kubenswrapper[4765]: I0121 13:07:09.931967 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 13:07:09 crc kubenswrapper[4765]: I0121 13:07:09.976193 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.052333 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.109881 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.120536 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.162568 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.176024 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.222563 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.279914 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.287472 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.310445 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.320156 4765 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 21 13:07:10 crc 
kubenswrapper[4765]: I0121 13:07:10.320238 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.320304 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.321131 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"bdcb8c297cbd6fda01e719c58cef4ef067896dbd93516b16f73b6e62d1ad8fe2"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.321382 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://bdcb8c297cbd6fda01e719c58cef4ef067896dbd93516b16f73b6e62d1ad8fe2" gracePeriod=30 Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.336965 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.364202 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.367333 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.383410 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.441654 4765 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.482874 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.527553 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.532345 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.544421 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.603091 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.699361 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 13:07:10 crc 
kubenswrapper[4765]: I0121 13:07:10.842781 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.879377 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 13:07:10 crc kubenswrapper[4765]: I0121 13:07:10.970073 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 13:07:11 crc kubenswrapper[4765]: I0121 13:07:11.073655 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 13:07:11 crc kubenswrapper[4765]: I0121 13:07:11.073921 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 13:07:11 crc kubenswrapper[4765]: I0121 13:07:11.085848 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 13:07:11 crc kubenswrapper[4765]: I0121 13:07:11.202406 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 13:07:11 crc kubenswrapper[4765]: I0121 13:07:11.243517 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 13:07:11 crc kubenswrapper[4765]: I0121 13:07:11.250325 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 13:07:11 crc kubenswrapper[4765]: I0121 13:07:11.276166 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 13:07:11 crc kubenswrapper[4765]: I0121 13:07:11.362811 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 13:07:11 crc kubenswrapper[4765]: I0121 13:07:11.389897 4765 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 13:07:11 crc kubenswrapper[4765]: I0121 13:07:11.393953 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 13:07:11 crc kubenswrapper[4765]: I0121 13:07:11.427119 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 13:07:11 crc kubenswrapper[4765]: I0121 13:07:11.710621 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 13:07:11 crc kubenswrapper[4765]: I0121 13:07:11.724593 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 13:07:11 crc kubenswrapper[4765]: I0121 13:07:11.841675 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 13:07:11 crc kubenswrapper[4765]: I0121 13:07:11.857442 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 13:07:11 crc kubenswrapper[4765]: I0121 13:07:11.875108 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 13:07:11 crc 
kubenswrapper[4765]: I0121 13:07:11.928560 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 13:07:11 crc kubenswrapper[4765]: I0121 13:07:11.969609 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 13:07:12 crc kubenswrapper[4765]: I0121 13:07:12.084817 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 13:07:12 crc kubenswrapper[4765]: I0121 13:07:12.125087 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 13:07:12 crc kubenswrapper[4765]: I0121 13:07:12.171775 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 13:07:12 crc kubenswrapper[4765]: I0121 13:07:12.227884 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 13:07:12 crc kubenswrapper[4765]: I0121 13:07:12.244993 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 13:07:12 crc kubenswrapper[4765]: I0121 13:07:12.268719 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 13:07:12 crc kubenswrapper[4765]: I0121 13:07:12.324801 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 13:07:12 crc kubenswrapper[4765]: I0121 13:07:12.334850 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 13:07:12 crc kubenswrapper[4765]: I0121 13:07:12.396372 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 13:07:12 crc kubenswrapper[4765]: I0121 13:07:12.550077 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 13:07:12 crc kubenswrapper[4765]: I0121 13:07:12.583266 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 13:07:12 crc kubenswrapper[4765]: I0121 13:07:12.617417 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 13:07:12 crc kubenswrapper[4765]: I0121 13:07:12.635676 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 13:07:12 crc kubenswrapper[4765]: I0121 13:07:12.762922 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 13:07:12 crc kubenswrapper[4765]: I0121 13:07:12.817026 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 13:07:12 crc kubenswrapper[4765]: I0121 13:07:12.850304 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 13:07:12 crc kubenswrapper[4765]: I0121 13:07:12.860865 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 13:07:12 crc 
kubenswrapper[4765]: I0121 13:07:12.978372 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 13:07:13 crc kubenswrapper[4765]: I0121 13:07:13.077355 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 13:07:13 crc kubenswrapper[4765]: I0121 13:07:13.095105 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 13:07:13 crc kubenswrapper[4765]: I0121 13:07:13.172260 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 13:07:13 crc kubenswrapper[4765]: I0121 13:07:13.223091 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 13:07:13 crc kubenswrapper[4765]: I0121 13:07:13.252181 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 13:07:13 crc kubenswrapper[4765]: I0121 13:07:13.280893 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 13:07:13 crc kubenswrapper[4765]: I0121 13:07:13.418978 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 13:07:13 crc kubenswrapper[4765]: I0121 13:07:13.437024 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 13:07:13 crc kubenswrapper[4765]: I0121 13:07:13.532608 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 13:07:13 crc kubenswrapper[4765]: I0121 13:07:13.532628 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 13:07:13 crc kubenswrapper[4765]: I0121 13:07:13.532838 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 13:07:13 crc kubenswrapper[4765]: I0121 13:07:13.622564 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 13:07:13 crc kubenswrapper[4765]: I0121 13:07:13.718789 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 13:07:13 crc kubenswrapper[4765]: I0121 13:07:13.882070 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 13:07:13 crc kubenswrapper[4765]: I0121 13:07:13.933017 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 13:07:14 crc kubenswrapper[4765]: I0121 13:07:14.019895 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 13:07:14 crc kubenswrapper[4765]: I0121 13:07:14.236665 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 13:07:14 crc kubenswrapper[4765]: I0121 13:07:14.246772 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 13:07:14 crc kubenswrapper[4765]: I0121 13:07:14.268297 4765 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 13:07:14 crc kubenswrapper[4765]: I0121 13:07:14.383771 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 13:07:14 crc kubenswrapper[4765]: I0121 13:07:14.390877 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 13:07:14 crc kubenswrapper[4765]: I0121 13:07:14.562735 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 13:07:14 crc kubenswrapper[4765]: I0121 13:07:14.782199 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 13:07:14 crc kubenswrapper[4765]: I0121 13:07:14.947801 4765 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 13:07:14 crc kubenswrapper[4765]: I0121 13:07:14.982880 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.012226 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.018922 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.211510 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.217775 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.408998 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.445852 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.467902 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.510748 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.522450 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.621565 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.648399 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.679904 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.730424 4765 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.761847 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.788182 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.825799 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.831000 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.839613 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.858172 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.884563 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 13:07:15 crc kubenswrapper[4765]: I0121 13:07:15.938963 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 13:07:16 crc kubenswrapper[4765]: I0121 13:07:16.107846 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 13:07:16 crc kubenswrapper[4765]: I0121 13:07:16.228524 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 13:07:16 crc kubenswrapper[4765]: I0121 13:07:16.374614 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 13:07:16 crc kubenswrapper[4765]: I0121 13:07:16.374686 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 13:07:16 crc kubenswrapper[4765]: I0121 13:07:16.428409 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 13:07:16 crc kubenswrapper[4765]: I0121 13:07:16.472733 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 13:07:16 crc kubenswrapper[4765]: I0121 13:07:16.600181 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 13:07:16 crc kubenswrapper[4765]: I0121 13:07:16.633520 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 13:07:16 crc kubenswrapper[4765]: I0121 13:07:16.650664 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 13:07:16 crc kubenswrapper[4765]: I0121 13:07:16.745663 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 13:07:16 crc 
kubenswrapper[4765]: I0121 13:07:16.756304 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 13:07:16 crc kubenswrapper[4765]: I0121 13:07:16.766516 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 13:07:16 crc kubenswrapper[4765]: I0121 13:07:16.783116 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 13:07:16 crc kubenswrapper[4765]: I0121 13:07:16.794932 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 13:07:16 crc kubenswrapper[4765]: I0121 13:07:16.846572 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 13:07:16 crc kubenswrapper[4765]: I0121 13:07:16.861950 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 13:07:16 crc kubenswrapper[4765]: I0121 13:07:16.929034 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.104427 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.117537 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.131875 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.159021 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.217098 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.284728 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.466133 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.469804 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.480445 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.502775 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.506147 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.538367 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 13:07:17 crc 
kubenswrapper[4765]: I0121 13:07:17.551267 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.796089 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.798592 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.824689 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.826477 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.826705 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.849806 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.913700 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.946150 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 13:07:17 crc kubenswrapper[4765]: I0121 13:07:17.996612 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.078163 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.155545 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.178801 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.187245 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.233312 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.260322 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.263536 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.269372 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.362245 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 
13:07:18.426018 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.441190 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.472513 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.503124 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.506329 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.605867 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.652081 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.680412 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.731399 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.809242 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.811825 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.865249 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.871512 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.880664 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 13:07:18 crc kubenswrapper[4765]: I0121 13:07:18.942929 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 13:07:19 crc kubenswrapper[4765]: I0121 13:07:19.032045 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 13:07:19 crc kubenswrapper[4765]: I0121 13:07:19.043614 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 13:07:19 crc kubenswrapper[4765]: I0121 13:07:19.096778 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 13:07:19 crc kubenswrapper[4765]: I0121 13:07:19.144920 4765 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 21 13:07:19 crc kubenswrapper[4765]: I0121 13:07:19.231017 4765 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 13:07:19 crc kubenswrapper[4765]: I0121 13:07:19.320261 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 13:07:19 crc kubenswrapper[4765]: I0121 13:07:19.338488 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 13:07:19 crc kubenswrapper[4765]: I0121 13:07:19.365321 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 13:07:19 crc kubenswrapper[4765]: I0121 13:07:19.398615 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 13:07:19 crc kubenswrapper[4765]: I0121 13:07:19.572664 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 13:07:19 crc kubenswrapper[4765]: I0121 13:07:19.605453 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 13:07:19 crc kubenswrapper[4765]: I0121 13:07:19.624523 4765 scope.go:117] "RemoveContainer" containerID="fd7256b572d35d51a89a26a9d809e66ddaef4668fab63ce1fc2ed58bfc6736f7" Jan 21 13:07:19 crc kubenswrapper[4765]: E0121 13:07:19.625296 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 13:07:19 crc kubenswrapper[4765]: I0121 13:07:19.689853 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 13:07:19 crc kubenswrapper[4765]: I0121 13:07:19.693172 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 13:07:19 crc kubenswrapper[4765]: I0121 13:07:19.959065 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 13:07:19 crc kubenswrapper[4765]: I0121 13:07:19.985517 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 13:07:19 crc kubenswrapper[4765]: I0121 13:07:19.995534 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 13:07:20 crc kubenswrapper[4765]: I0121 13:07:20.029416 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 13:07:20 crc kubenswrapper[4765]: I0121 13:07:20.192032 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 13:07:20 crc kubenswrapper[4765]: I0121 13:07:20.350834 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 13:07:20 crc kubenswrapper[4765]: I0121 13:07:20.353628 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 13:07:20 crc kubenswrapper[4765]: I0121 
13:07:20.520188 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 13:07:20 crc kubenswrapper[4765]: I0121 13:07:20.637518 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 13:07:20 crc kubenswrapper[4765]: I0121 13:07:20.736463 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 13:07:20 crc kubenswrapper[4765]: I0121 13:07:20.775723 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 13:07:20 crc kubenswrapper[4765]: I0121 13:07:20.786643 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 13:07:20 crc kubenswrapper[4765]: I0121 13:07:20.952783 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 13:07:20 crc kubenswrapper[4765]: I0121 13:07:20.974427 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.048796 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.232079 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.234529 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.333328 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.410091 4765 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.418314 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=46.41826182 podStartE2EDuration="46.41826182s" podCreationTimestamp="2026-01-21 13:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:06:54.730384111 +0000 UTC m=+275.748109953" watchObservedRunningTime="2026-01-21 13:07:21.41826182 +0000 UTC m=+302.435987642" Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.420473 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.420615 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.420718 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwg7s","openshift-marketplace/community-operators-8pg48","openshift-marketplace/redhat-operators-x7f8m","openshift-marketplace/marketplace-operator-79b997595-dzwvz","openshift-marketplace/certified-operators-csdrp"] Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 
13:07:21.421018 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-csdrp" podUID="8f46c9a8-ee1d-497c-92f3-d7f43ebddc85" containerName="registry-server" containerID="cri-o://58c292d8ab3268cd28e5b8aecb339e54f89484da567fe9425ca52492b24f2b5d" gracePeriod=30 Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.421122 4765 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1981c521-dcec-4302-b34b-4464c8ebf331" Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.424175 4765 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1981c521-dcec-4302-b34b-4464c8ebf331" Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.421379 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" podUID="de62a4d5-de79-4ad5-983d-7071fb85dce8" containerName="marketplace-operator" containerID="cri-o://46eb260759cded0c901a66b6878cac473a9c57ad591f9fb26605fa55db48b36e" gracePeriod=30 Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.422273 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zwg7s" podUID="4bd12a18-d34b-4d96-9409-f26a13dc93f5" containerName="registry-server" containerID="cri-o://c6105f3175bbb9953416d8896274989977f2a79bfa64ab1de1508c18fa4d803f" gracePeriod=30 Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.422420 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8pg48" podUID="1370386e-d1d5-471c-a3cc-fcbc7649a549" containerName="registry-server" containerID="cri-o://104973b87a05e1b4152e671cd38eaeeae50bba60b0c523833591131d44ae49d6" gracePeriod=30 Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.422026 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x7f8m" podUID="080522e6-050a-4df7-afe5-2476e455e157" containerName="registry-server" containerID="cri-o://498e752f0468710b1205a2632de0e291557e6ba1713647312e5db1c4642c6f48" gracePeriod=30 Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.428454 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.431501 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.507824 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=27.507791989 podStartE2EDuration="27.507791989s" podCreationTimestamp="2026-01-21 13:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:07:21.463597562 +0000 UTC m=+302.481323384" watchObservedRunningTime="2026-01-21 13:07:21.507791989 +0000 UTC m=+302.525517811" Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.545125 4765 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.561906 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" 
Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.730872 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.821275 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.849963 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.896107 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8pg48" Jan 21 13:07:21 crc kubenswrapper[4765]: I0121 13:07:21.971457 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.029173 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.029373 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-csdrp" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.031104 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1370386e-d1d5-471c-a3cc-fcbc7649a549-catalog-content\") pod \"1370386e-d1d5-471c-a3cc-fcbc7649a549\" (UID: \"1370386e-d1d5-471c-a3cc-fcbc7649a549\") " Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.031183 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1370386e-d1d5-471c-a3cc-fcbc7649a549-utilities\") pod \"1370386e-d1d5-471c-a3cc-fcbc7649a549\" (UID: \"1370386e-d1d5-471c-a3cc-fcbc7649a549\") " Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.031194 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwg7s" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.031345 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq5tl\" (UniqueName: \"kubernetes.io/projected/1370386e-d1d5-471c-a3cc-fcbc7649a549-kube-api-access-qq5tl\") pod \"1370386e-d1d5-471c-a3cc-fcbc7649a549\" (UID: \"1370386e-d1d5-471c-a3cc-fcbc7649a549\") " Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.031411 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/de62a4d5-de79-4ad5-983d-7071fb85dce8-marketplace-operator-metrics\") pod \"de62a4d5-de79-4ad5-983d-7071fb85dce8\" (UID: \"de62a4d5-de79-4ad5-983d-7071fb85dce8\") " Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.031489 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de62a4d5-de79-4ad5-983d-7071fb85dce8-marketplace-trusted-ca\") pod \"de62a4d5-de79-4ad5-983d-7071fb85dce8\" (UID: \"de62a4d5-de79-4ad5-983d-7071fb85dce8\") " Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.031508 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5bq2\" (UniqueName: \"kubernetes.io/projected/de62a4d5-de79-4ad5-983d-7071fb85dce8-kube-api-access-f5bq2\") pod \"de62a4d5-de79-4ad5-983d-7071fb85dce8\" (UID: \"de62a4d5-de79-4ad5-983d-7071fb85dce8\") " Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.032598 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1370386e-d1d5-471c-a3cc-fcbc7649a549-utilities" (OuterVolumeSpecName: "utilities") pod "1370386e-d1d5-471c-a3cc-fcbc7649a549" (UID: "1370386e-d1d5-471c-a3cc-fcbc7649a549"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.033512 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de62a4d5-de79-4ad5-983d-7071fb85dce8-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "de62a4d5-de79-4ad5-983d-7071fb85dce8" (UID: "de62a4d5-de79-4ad5-983d-7071fb85dce8"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.039922 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1370386e-d1d5-471c-a3cc-fcbc7649a549-kube-api-access-qq5tl" (OuterVolumeSpecName: "kube-api-access-qq5tl") pod "1370386e-d1d5-471c-a3cc-fcbc7649a549" (UID: "1370386e-d1d5-471c-a3cc-fcbc7649a549"). InnerVolumeSpecName "kube-api-access-qq5tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.040249 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de62a4d5-de79-4ad5-983d-7071fb85dce8-kube-api-access-f5bq2" (OuterVolumeSpecName: "kube-api-access-f5bq2") pod "de62a4d5-de79-4ad5-983d-7071fb85dce8" (UID: "de62a4d5-de79-4ad5-983d-7071fb85dce8"). InnerVolumeSpecName "kube-api-access-f5bq2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.040455 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de62a4d5-de79-4ad5-983d-7071fb85dce8-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "de62a4d5-de79-4ad5-983d-7071fb85dce8" (UID: "de62a4d5-de79-4ad5-983d-7071fb85dce8"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.056395 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7f8m" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.058955 4765 generic.go:334] "Generic (PLEG): container finished" podID="de62a4d5-de79-4ad5-983d-7071fb85dce8" containerID="46eb260759cded0c901a66b6878cac473a9c57ad591f9fb26605fa55db48b36e" exitCode=0 Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.059002 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.059038 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" event={"ID":"de62a4d5-de79-4ad5-983d-7071fb85dce8","Type":"ContainerDied","Data":"46eb260759cded0c901a66b6878cac473a9c57ad591f9fb26605fa55db48b36e"} Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.059068 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dzwvz" event={"ID":"de62a4d5-de79-4ad5-983d-7071fb85dce8","Type":"ContainerDied","Data":"22a865e2f709acb91802cb5d7502e87abd08379ea81180462dc7a8df7f550ae1"} Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.059102 4765 scope.go:117] "RemoveContainer" containerID="46eb260759cded0c901a66b6878cac473a9c57ad591f9fb26605fa55db48b36e" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.076080 4765 generic.go:334] "Generic (PLEG): container finished" podID="1370386e-d1d5-471c-a3cc-fcbc7649a549" containerID="104973b87a05e1b4152e671cd38eaeeae50bba60b0c523833591131d44ae49d6" exitCode=0 Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.076255 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pg48" event={"ID":"1370386e-d1d5-471c-a3cc-fcbc7649a549","Type":"ContainerDied","Data":"104973b87a05e1b4152e671cd38eaeeae50bba60b0c523833591131d44ae49d6"} Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.076291 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pg48" event={"ID":"1370386e-d1d5-471c-a3cc-fcbc7649a549","Type":"ContainerDied","Data":"994f13f636fbfe15eca9da132661e221e0a668b923b753219a43f613d5a39c0a"} Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.077539 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8pg48" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.079734 4765 generic.go:334] "Generic (PLEG): container finished" podID="4bd12a18-d34b-4d96-9409-f26a13dc93f5" containerID="c6105f3175bbb9953416d8896274989977f2a79bfa64ab1de1508c18fa4d803f" exitCode=0 Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.079807 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwg7s" event={"ID":"4bd12a18-d34b-4d96-9409-f26a13dc93f5","Type":"ContainerDied","Data":"c6105f3175bbb9953416d8896274989977f2a79bfa64ab1de1508c18fa4d803f"} Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.079841 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwg7s" event={"ID":"4bd12a18-d34b-4d96-9409-f26a13dc93f5","Type":"ContainerDied","Data":"f58f06fecc04e2ea439b916e568dce5b2a60b8a56e98162348d99c791341f835"} Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.079894 4765 scope.go:117] "RemoveContainer" containerID="46eb260759cded0c901a66b6878cac473a9c57ad591f9fb26605fa55db48b36e" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.079986 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwg7s" Jan 21 13:07:22 crc kubenswrapper[4765]: E0121 13:07:22.080635 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46eb260759cded0c901a66b6878cac473a9c57ad591f9fb26605fa55db48b36e\": container with ID starting with 46eb260759cded0c901a66b6878cac473a9c57ad591f9fb26605fa55db48b36e not found: ID does not exist" containerID="46eb260759cded0c901a66b6878cac473a9c57ad591f9fb26605fa55db48b36e" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.080676 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46eb260759cded0c901a66b6878cac473a9c57ad591f9fb26605fa55db48b36e"} err="failed to get container status \"46eb260759cded0c901a66b6878cac473a9c57ad591f9fb26605fa55db48b36e\": rpc error: code = NotFound desc = could not find container \"46eb260759cded0c901a66b6878cac473a9c57ad591f9fb26605fa55db48b36e\": container with ID starting with 46eb260759cded0c901a66b6878cac473a9c57ad591f9fb26605fa55db48b36e not found: ID does not exist" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.080707 4765 scope.go:117] "RemoveContainer" containerID="104973b87a05e1b4152e671cd38eaeeae50bba60b0c523833591131d44ae49d6" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.086314 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.087351 4765 generic.go:334] "Generic (PLEG): container finished" podID="8f46c9a8-ee1d-497c-92f3-d7f43ebddc85" containerID="58c292d8ab3268cd28e5b8aecb339e54f89484da567fe9425ca52492b24f2b5d" exitCode=0 Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.087427 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-csdrp" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.087433 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-csdrp" event={"ID":"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85","Type":"ContainerDied","Data":"58c292d8ab3268cd28e5b8aecb339e54f89484da567fe9425ca52492b24f2b5d"} Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.087631 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-csdrp" event={"ID":"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85","Type":"ContainerDied","Data":"718233af53a0851c3b705ead2857d2d0d2267474d9660336e2a3ffe67275c20a"} Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.092376 4765 generic.go:334] "Generic (PLEG): container finished" podID="080522e6-050a-4df7-afe5-2476e455e157" containerID="498e752f0468710b1205a2632de0e291557e6ba1713647312e5db1c4642c6f48" exitCode=0 Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.092438 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7f8m" event={"ID":"080522e6-050a-4df7-afe5-2476e455e157","Type":"ContainerDied","Data":"498e752f0468710b1205a2632de0e291557e6ba1713647312e5db1c4642c6f48"} Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.092472 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7f8m" event={"ID":"080522e6-050a-4df7-afe5-2476e455e157","Type":"ContainerDied","Data":"9846c5992be783b7237249c484001d2a9c0df0b331d146c156b6728c8fec034e"} Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.093309 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7f8m" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.111321 4765 scope.go:117] "RemoveContainer" containerID="3220c81ab59dfbb9a0633c250991f1d610cf480fabb9e45f667e16ecdf936676" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.113054 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dzwvz"] Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.120332 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dzwvz"] Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.136562 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2frp\" (UniqueName: \"kubernetes.io/projected/8f46c9a8-ee1d-497c-92f3-d7f43ebddc85-kube-api-access-k2frp\") pod \"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85\" (UID: \"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85\") " Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.136974 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd12a18-d34b-4d96-9409-f26a13dc93f5-utilities\") pod \"4bd12a18-d34b-4d96-9409-f26a13dc93f5\" (UID: \"4bd12a18-d34b-4d96-9409-f26a13dc93f5\") " Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.137507 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk22t\" (UniqueName: \"kubernetes.io/projected/080522e6-050a-4df7-afe5-2476e455e157-kube-api-access-kk22t\") pod \"080522e6-050a-4df7-afe5-2476e455e157\" (UID: \"080522e6-050a-4df7-afe5-2476e455e157\") " Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.137666 4765 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f46c9a8-ee1d-497c-92f3-d7f43ebddc85-utilities\") pod \"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85\" (UID: \"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85\") " Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.137864 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080522e6-050a-4df7-afe5-2476e455e157-catalog-content\") pod \"080522e6-050a-4df7-afe5-2476e455e157\" (UID: \"080522e6-050a-4df7-afe5-2476e455e157\") " Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.138001 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd12a18-d34b-4d96-9409-f26a13dc93f5-catalog-content\") pod \"4bd12a18-d34b-4d96-9409-f26a13dc93f5\" (UID: \"4bd12a18-d34b-4d96-9409-f26a13dc93f5\") " Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.138185 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f46c9a8-ee1d-497c-92f3-d7f43ebddc85-catalog-content\") pod \"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85\" (UID: \"8f46c9a8-ee1d-497c-92f3-d7f43ebddc85\") " Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.147599 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbgk4\" (UniqueName: \"kubernetes.io/projected/4bd12a18-d34b-4d96-9409-f26a13dc93f5-kube-api-access-qbgk4\") pod \"4bd12a18-d34b-4d96-9409-f26a13dc93f5\" (UID: \"4bd12a18-d34b-4d96-9409-f26a13dc93f5\") " Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.147719 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080522e6-050a-4df7-afe5-2476e455e157-utilities\") pod \"080522e6-050a-4df7-afe5-2476e455e157\" (UID: \"080522e6-050a-4df7-afe5-2476e455e157\") " Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.148332 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5bq2\" (UniqueName: \"kubernetes.io/projected/de62a4d5-de79-4ad5-983d-7071fb85dce8-kube-api-access-f5bq2\") on node \"crc\" DevicePath \"\"" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.148354 4765 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/de62a4d5-de79-4ad5-983d-7071fb85dce8-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.148368 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1370386e-d1d5-471c-a3cc-fcbc7649a549-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.148383 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq5tl\" (UniqueName: \"kubernetes.io/projected/1370386e-d1d5-471c-a3cc-fcbc7649a549-kube-api-access-qq5tl\") on node \"crc\" DevicePath \"\"" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.148395 4765 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/de62a4d5-de79-4ad5-983d-7071fb85dce8-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.139372 4765 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/4bd12a18-d34b-4d96-9409-f26a13dc93f5-utilities" (OuterVolumeSpecName: "utilities") pod "4bd12a18-d34b-4d96-9409-f26a13dc93f5" (UID: "4bd12a18-d34b-4d96-9409-f26a13dc93f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.139792 4765 scope.go:117] "RemoveContainer" containerID="1928ca2714dccac5c13007535548ff3f9087ff650b7ce84211d5bb9d793fad49" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.140467 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f46c9a8-ee1d-497c-92f3-d7f43ebddc85-kube-api-access-k2frp" (OuterVolumeSpecName: "kube-api-access-k2frp") pod "8f46c9a8-ee1d-497c-92f3-d7f43ebddc85" (UID: "8f46c9a8-ee1d-497c-92f3-d7f43ebddc85"). InnerVolumeSpecName "kube-api-access-k2frp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.141883 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f46c9a8-ee1d-497c-92f3-d7f43ebddc85-utilities" (OuterVolumeSpecName: "utilities") pod "8f46c9a8-ee1d-497c-92f3-d7f43ebddc85" (UID: "8f46c9a8-ee1d-497c-92f3-d7f43ebddc85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.145799 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/080522e6-050a-4df7-afe5-2476e455e157-kube-api-access-kk22t" (OuterVolumeSpecName: "kube-api-access-kk22t") pod "080522e6-050a-4df7-afe5-2476e455e157" (UID: "080522e6-050a-4df7-afe5-2476e455e157"). InnerVolumeSpecName "kube-api-access-kk22t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.149754 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/080522e6-050a-4df7-afe5-2476e455e157-utilities" (OuterVolumeSpecName: "utilities") pod "080522e6-050a-4df7-afe5-2476e455e157" (UID: "080522e6-050a-4df7-afe5-2476e455e157"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.152781 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bd12a18-d34b-4d96-9409-f26a13dc93f5-kube-api-access-qbgk4" (OuterVolumeSpecName: "kube-api-access-qbgk4") pod "4bd12a18-d34b-4d96-9409-f26a13dc93f5" (UID: "4bd12a18-d34b-4d96-9409-f26a13dc93f5"). InnerVolumeSpecName "kube-api-access-qbgk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.162333 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1370386e-d1d5-471c-a3cc-fcbc7649a549-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1370386e-d1d5-471c-a3cc-fcbc7649a549" (UID: "1370386e-d1d5-471c-a3cc-fcbc7649a549"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.179146 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bd12a18-d34b-4d96-9409-f26a13dc93f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bd12a18-d34b-4d96-9409-f26a13dc93f5" (UID: "4bd12a18-d34b-4d96-9409-f26a13dc93f5"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.198725 4765 scope.go:117] "RemoveContainer" containerID="104973b87a05e1b4152e671cd38eaeeae50bba60b0c523833591131d44ae49d6" Jan 21 13:07:22 crc kubenswrapper[4765]: E0121 13:07:22.200741 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"104973b87a05e1b4152e671cd38eaeeae50bba60b0c523833591131d44ae49d6\": container with ID starting with 104973b87a05e1b4152e671cd38eaeeae50bba60b0c523833591131d44ae49d6 not found: ID does not exist" containerID="104973b87a05e1b4152e671cd38eaeeae50bba60b0c523833591131d44ae49d6" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.200857 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"104973b87a05e1b4152e671cd38eaeeae50bba60b0c523833591131d44ae49d6"} err="failed to get container status \"104973b87a05e1b4152e671cd38eaeeae50bba60b0c523833591131d44ae49d6\": rpc error: code = NotFound desc = could not find container \"104973b87a05e1b4152e671cd38eaeeae50bba60b0c523833591131d44ae49d6\": container with ID starting with 104973b87a05e1b4152e671cd38eaeeae50bba60b0c523833591131d44ae49d6 not found: ID does not exist" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.200900 4765 scope.go:117] "RemoveContainer" containerID="3220c81ab59dfbb9a0633c250991f1d610cf480fabb9e45f667e16ecdf936676" Jan 21 13:07:22 crc kubenswrapper[4765]: E0121 13:07:22.203062 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3220c81ab59dfbb9a0633c250991f1d610cf480fabb9e45f667e16ecdf936676\": container with ID starting with 3220c81ab59dfbb9a0633c250991f1d610cf480fabb9e45f667e16ecdf936676 not found: ID does not exist" containerID="3220c81ab59dfbb9a0633c250991f1d610cf480fabb9e45f667e16ecdf936676" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.203203 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3220c81ab59dfbb9a0633c250991f1d610cf480fabb9e45f667e16ecdf936676"} err="failed to get container status \"3220c81ab59dfbb9a0633c250991f1d610cf480fabb9e45f667e16ecdf936676\": rpc error: code = NotFound desc = could not find container \"3220c81ab59dfbb9a0633c250991f1d610cf480fabb9e45f667e16ecdf936676\": container with ID starting with 3220c81ab59dfbb9a0633c250991f1d610cf480fabb9e45f667e16ecdf936676 not found: ID does not exist" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.203479 4765 scope.go:117] "RemoveContainer" containerID="1928ca2714dccac5c13007535548ff3f9087ff650b7ce84211d5bb9d793fad49" Jan 21 13:07:22 crc kubenswrapper[4765]: E0121 13:07:22.204441 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1928ca2714dccac5c13007535548ff3f9087ff650b7ce84211d5bb9d793fad49\": container with ID starting with 1928ca2714dccac5c13007535548ff3f9087ff650b7ce84211d5bb9d793fad49 not found: ID does not exist" containerID="1928ca2714dccac5c13007535548ff3f9087ff650b7ce84211d5bb9d793fad49" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.204495 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1928ca2714dccac5c13007535548ff3f9087ff650b7ce84211d5bb9d793fad49"} err="failed to get container status \"1928ca2714dccac5c13007535548ff3f9087ff650b7ce84211d5bb9d793fad49\": rpc 
error: code = NotFound desc = could not find container \"1928ca2714dccac5c13007535548ff3f9087ff650b7ce84211d5bb9d793fad49\": container with ID starting with 1928ca2714dccac5c13007535548ff3f9087ff650b7ce84211d5bb9d793fad49 not found: ID does not exist" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.204531 4765 scope.go:117] "RemoveContainer" containerID="c6105f3175bbb9953416d8896274989977f2a79bfa64ab1de1508c18fa4d803f" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.218719 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f46c9a8-ee1d-497c-92f3-d7f43ebddc85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f46c9a8-ee1d-497c-92f3-d7f43ebddc85" (UID: "8f46c9a8-ee1d-497c-92f3-d7f43ebddc85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.232892 4765 scope.go:117] "RemoveContainer" containerID="870fb87cd9366a24bd6c45586c18561e14af0e254ffb111f70b457590026fc4f" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.249682 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bd12a18-d34b-4d96-9409-f26a13dc93f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.249721 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f46c9a8-ee1d-497c-92f3-d7f43ebddc85-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.249736 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbgk4\" (UniqueName: \"kubernetes.io/projected/4bd12a18-d34b-4d96-9409-f26a13dc93f5-kube-api-access-qbgk4\") on node \"crc\" DevicePath \"\"" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.249751 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080522e6-050a-4df7-afe5-2476e455e157-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.249765 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2frp\" (UniqueName: \"kubernetes.io/projected/8f46c9a8-ee1d-497c-92f3-d7f43ebddc85-kube-api-access-k2frp\") on node \"crc\" DevicePath \"\"" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.249778 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1370386e-d1d5-471c-a3cc-fcbc7649a549-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.249789 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bd12a18-d34b-4d96-9409-f26a13dc93f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.249801 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk22t\" (UniqueName: \"kubernetes.io/projected/080522e6-050a-4df7-afe5-2476e455e157-kube-api-access-kk22t\") on node \"crc\" DevicePath \"\"" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.249816 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f46c9a8-ee1d-497c-92f3-d7f43ebddc85-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.251564 4765 
scope.go:117] "RemoveContainer" containerID="d3cdb17f10189e446a27b2cf60eb416f848be038896bbde6a7355b133aeba8a6" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.266697 4765 scope.go:117] "RemoveContainer" containerID="c6105f3175bbb9953416d8896274989977f2a79bfa64ab1de1508c18fa4d803f" Jan 21 13:07:22 crc kubenswrapper[4765]: E0121 13:07:22.267315 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6105f3175bbb9953416d8896274989977f2a79bfa64ab1de1508c18fa4d803f\": container with ID starting with c6105f3175bbb9953416d8896274989977f2a79bfa64ab1de1508c18fa4d803f not found: ID does not exist" containerID="c6105f3175bbb9953416d8896274989977f2a79bfa64ab1de1508c18fa4d803f" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.267383 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6105f3175bbb9953416d8896274989977f2a79bfa64ab1de1508c18fa4d803f"} err="failed to get container status \"c6105f3175bbb9953416d8896274989977f2a79bfa64ab1de1508c18fa4d803f\": rpc error: code = NotFound desc = could not find container \"c6105f3175bbb9953416d8896274989977f2a79bfa64ab1de1508c18fa4d803f\": container with ID starting with c6105f3175bbb9953416d8896274989977f2a79bfa64ab1de1508c18fa4d803f not found: ID does not exist" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.267427 4765 scope.go:117] "RemoveContainer" containerID="870fb87cd9366a24bd6c45586c18561e14af0e254ffb111f70b457590026fc4f" Jan 21 13:07:22 crc kubenswrapper[4765]: E0121 13:07:22.267779 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"870fb87cd9366a24bd6c45586c18561e14af0e254ffb111f70b457590026fc4f\": container with ID starting with 870fb87cd9366a24bd6c45586c18561e14af0e254ffb111f70b457590026fc4f not found: ID does not exist" containerID="870fb87cd9366a24bd6c45586c18561e14af0e254ffb111f70b457590026fc4f" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.267825 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"870fb87cd9366a24bd6c45586c18561e14af0e254ffb111f70b457590026fc4f"} err="failed to get container status \"870fb87cd9366a24bd6c45586c18561e14af0e254ffb111f70b457590026fc4f\": rpc error: code = NotFound desc = could not find container \"870fb87cd9366a24bd6c45586c18561e14af0e254ffb111f70b457590026fc4f\": container with ID starting with 870fb87cd9366a24bd6c45586c18561e14af0e254ffb111f70b457590026fc4f not found: ID does not exist" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.267855 4765 scope.go:117] "RemoveContainer" containerID="d3cdb17f10189e446a27b2cf60eb416f848be038896bbde6a7355b133aeba8a6" Jan 21 13:07:22 crc kubenswrapper[4765]: E0121 13:07:22.268269 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3cdb17f10189e446a27b2cf60eb416f848be038896bbde6a7355b133aeba8a6\": container with ID starting with d3cdb17f10189e446a27b2cf60eb416f848be038896bbde6a7355b133aeba8a6 not found: ID does not exist" containerID="d3cdb17f10189e446a27b2cf60eb416f848be038896bbde6a7355b133aeba8a6" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.268304 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3cdb17f10189e446a27b2cf60eb416f848be038896bbde6a7355b133aeba8a6"} err="failed to get container status 
\"d3cdb17f10189e446a27b2cf60eb416f848be038896bbde6a7355b133aeba8a6\": rpc error: code = NotFound desc = could not find container \"d3cdb17f10189e446a27b2cf60eb416f848be038896bbde6a7355b133aeba8a6\": container with ID starting with d3cdb17f10189e446a27b2cf60eb416f848be038896bbde6a7355b133aeba8a6 not found: ID does not exist" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.268319 4765 scope.go:117] "RemoveContainer" containerID="58c292d8ab3268cd28e5b8aecb339e54f89484da567fe9425ca52492b24f2b5d" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.284271 4765 scope.go:117] "RemoveContainer" containerID="c0df5f6988cd3207387ebacc21ed589b506f5b47953d43a7b0387d144b0792e0" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.302236 4765 scope.go:117] "RemoveContainer" containerID="36cdad04543e88b5b41f0d3bbef7aca4b092dfdc798be4da5b76436d04f6e0bb" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.311251 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/080522e6-050a-4df7-afe5-2476e455e157-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "080522e6-050a-4df7-afe5-2476e455e157" (UID: "080522e6-050a-4df7-afe5-2476e455e157"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.317615 4765 scope.go:117] "RemoveContainer" containerID="58c292d8ab3268cd28e5b8aecb339e54f89484da567fe9425ca52492b24f2b5d" Jan 21 13:07:22 crc kubenswrapper[4765]: E0121 13:07:22.318169 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58c292d8ab3268cd28e5b8aecb339e54f89484da567fe9425ca52492b24f2b5d\": container with ID starting with 58c292d8ab3268cd28e5b8aecb339e54f89484da567fe9425ca52492b24f2b5d not found: ID does not exist" containerID="58c292d8ab3268cd28e5b8aecb339e54f89484da567fe9425ca52492b24f2b5d" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.318237 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58c292d8ab3268cd28e5b8aecb339e54f89484da567fe9425ca52492b24f2b5d"} err="failed to get container status \"58c292d8ab3268cd28e5b8aecb339e54f89484da567fe9425ca52492b24f2b5d\": rpc error: code = NotFound desc = could not find container \"58c292d8ab3268cd28e5b8aecb339e54f89484da567fe9425ca52492b24f2b5d\": container with ID starting with 58c292d8ab3268cd28e5b8aecb339e54f89484da567fe9425ca52492b24f2b5d not found: ID does not exist" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.318283 4765 scope.go:117] "RemoveContainer" containerID="c0df5f6988cd3207387ebacc21ed589b506f5b47953d43a7b0387d144b0792e0" Jan 21 13:07:22 crc kubenswrapper[4765]: E0121 13:07:22.318696 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0df5f6988cd3207387ebacc21ed589b506f5b47953d43a7b0387d144b0792e0\": container with ID starting with c0df5f6988cd3207387ebacc21ed589b506f5b47953d43a7b0387d144b0792e0 not found: ID does not exist" containerID="c0df5f6988cd3207387ebacc21ed589b506f5b47953d43a7b0387d144b0792e0" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.318746 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0df5f6988cd3207387ebacc21ed589b506f5b47953d43a7b0387d144b0792e0"} err="failed to get container status \"c0df5f6988cd3207387ebacc21ed589b506f5b47953d43a7b0387d144b0792e0\": rpc error: code = NotFound 
desc = could not find container \"c0df5f6988cd3207387ebacc21ed589b506f5b47953d43a7b0387d144b0792e0\": container with ID starting with c0df5f6988cd3207387ebacc21ed589b506f5b47953d43a7b0387d144b0792e0 not found: ID does not exist"
Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.318786 4765 scope.go:117] "RemoveContainer" containerID="36cdad04543e88b5b41f0d3bbef7aca4b092dfdc798be4da5b76436d04f6e0bb"
Jan 21 13:07:22 crc kubenswrapper[4765]: E0121 13:07:22.319066 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36cdad04543e88b5b41f0d3bbef7aca4b092dfdc798be4da5b76436d04f6e0bb\": container with ID starting with 36cdad04543e88b5b41f0d3bbef7aca4b092dfdc798be4da5b76436d04f6e0bb not found: ID does not exist" containerID="36cdad04543e88b5b41f0d3bbef7aca4b092dfdc798be4da5b76436d04f6e0bb"
Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.319094 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36cdad04543e88b5b41f0d3bbef7aca4b092dfdc798be4da5b76436d04f6e0bb"} err="failed to get container status \"36cdad04543e88b5b41f0d3bbef7aca4b092dfdc798be4da5b76436d04f6e0bb\": rpc error: code = NotFound desc = could not find container \"36cdad04543e88b5b41f0d3bbef7aca4b092dfdc798be4da5b76436d04f6e0bb\": container with ID starting with 36cdad04543e88b5b41f0d3bbef7aca4b092dfdc798be4da5b76436d04f6e0bb not found: ID does not exist"
Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.319109 4765 scope.go:117] "RemoveContainer" containerID="498e752f0468710b1205a2632de0e291557e6ba1713647312e5db1c4642c6f48"
Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.335760 4765 scope.go:117] "RemoveContainer" containerID="e4e74cf0e1d966f812bc72af991a6e65f47cfc057b78aacfe740b216a98d8f02"
Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.350956 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080522e6-050a-4df7-afe5-2476e455e157-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.353710 4765 scope.go:117] "RemoveContainer" containerID="54b404a2c40714d1c65b92f721cfaf12fb632037ebd9b289fdcc3461b6544781"
Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.368667 4765 scope.go:117] "RemoveContainer" containerID="498e752f0468710b1205a2632de0e291557e6ba1713647312e5db1c4642c6f48"
Jan 21 13:07:22 crc kubenswrapper[4765]: E0121 13:07:22.369273 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"498e752f0468710b1205a2632de0e291557e6ba1713647312e5db1c4642c6f48\": container with ID starting with 498e752f0468710b1205a2632de0e291557e6ba1713647312e5db1c4642c6f48 not found: ID does not exist" containerID="498e752f0468710b1205a2632de0e291557e6ba1713647312e5db1c4642c6f48"
Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.369349 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498e752f0468710b1205a2632de0e291557e6ba1713647312e5db1c4642c6f48"} err="failed to get container status \"498e752f0468710b1205a2632de0e291557e6ba1713647312e5db1c4642c6f48\": rpc error: code = NotFound desc = could not find container \"498e752f0468710b1205a2632de0e291557e6ba1713647312e5db1c4642c6f48\": container with ID starting with 498e752f0468710b1205a2632de0e291557e6ba1713647312e5db1c4642c6f48 not found: ID does not exist"
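The repeating RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triplets above are a benign race rather than a real failure: by the time the second cleanup path asks the runtime about a container, the first path has already removed it, so the lookup returns gRPC NotFound and there is nothing left to do. A sketch of the idempotent handling such a race permits; deleteContainer and removeFn are hypothetical stand-ins for the CRI RemoveContainer call (the kubelet, as the log shows, records the error and moves on):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// deleteContainer treats "already gone" as success. removeFn stands in
// for the CRI RemoveContainer RPC; both names are invented for this sketch.
func deleteContainer(id string, removeFn func(string) error) error {
	err := removeFn(id)
	if err == nil {
		return nil
	}
	// "rpc error: code = NotFound", exactly as in the entries above:
	// the other cleanup path won the race, so the work is already done.
	if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
		return nil
	}
	return err
}

func main() {
	gone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	fmt.Println(deleteContainer("36cdad04543e88b5...", gone)) // prints <nil>
}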
Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.369387 4765 scope.go:117] "RemoveContainer" containerID="e4e74cf0e1d966f812bc72af991a6e65f47cfc057b78aacfe740b216a98d8f02"
Jan 21 13:07:22 crc kubenswrapper[4765]: E0121 13:07:22.370552 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4e74cf0e1d966f812bc72af991a6e65f47cfc057b78aacfe740b216a98d8f02\": container with ID starting with e4e74cf0e1d966f812bc72af991a6e65f47cfc057b78aacfe740b216a98d8f02 not found: ID does not exist" containerID="e4e74cf0e1d966f812bc72af991a6e65f47cfc057b78aacfe740b216a98d8f02"
Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.370593 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4e74cf0e1d966f812bc72af991a6e65f47cfc057b78aacfe740b216a98d8f02"} err="failed to get container status \"e4e74cf0e1d966f812bc72af991a6e65f47cfc057b78aacfe740b216a98d8f02\": rpc error: code = NotFound desc = could not find container \"e4e74cf0e1d966f812bc72af991a6e65f47cfc057b78aacfe740b216a98d8f02\": container with ID starting with e4e74cf0e1d966f812bc72af991a6e65f47cfc057b78aacfe740b216a98d8f02 not found: ID does not exist"
Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.370629 4765 scope.go:117] "RemoveContainer" containerID="54b404a2c40714d1c65b92f721cfaf12fb632037ebd9b289fdcc3461b6544781"
Jan 21 13:07:22 crc kubenswrapper[4765]: E0121 13:07:22.371077 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54b404a2c40714d1c65b92f721cfaf12fb632037ebd9b289fdcc3461b6544781\": container with ID starting with 54b404a2c40714d1c65b92f721cfaf12fb632037ebd9b289fdcc3461b6544781 not found: ID does not exist" containerID="54b404a2c40714d1c65b92f721cfaf12fb632037ebd9b289fdcc3461b6544781"
Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.371139 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54b404a2c40714d1c65b92f721cfaf12fb632037ebd9b289fdcc3461b6544781"} err="failed to get container status \"54b404a2c40714d1c65b92f721cfaf12fb632037ebd9b289fdcc3461b6544781\": rpc error: code = NotFound desc = could not find container \"54b404a2c40714d1c65b92f721cfaf12fb632037ebd9b289fdcc3461b6544781\": container with ID starting with 54b404a2c40714d1c65b92f721cfaf12fb632037ebd9b289fdcc3461b6544781 not found: ID does not exist"
Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.445024 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8pg48"]
Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.450773 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8pg48"]
Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.462953 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwg7s"]
Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.468888 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwg7s"]
Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.473449 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-csdrp"]
Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.480706 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.494691 4765 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-csdrp"] Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.502537 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.503465 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x7f8m"] Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.507377 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x7f8m"] Jan 21 13:07:22 crc kubenswrapper[4765]: I0121 13:07:22.984752 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 13:07:23 crc kubenswrapper[4765]: I0121 13:07:23.003767 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 13:07:23 crc kubenswrapper[4765]: I0121 13:07:23.509532 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 13:07:23 crc kubenswrapper[4765]: I0121 13:07:23.622250 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="080522e6-050a-4df7-afe5-2476e455e157" path="/var/lib/kubelet/pods/080522e6-050a-4df7-afe5-2476e455e157/volumes" Jan 21 13:07:23 crc kubenswrapper[4765]: I0121 13:07:23.622927 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1370386e-d1d5-471c-a3cc-fcbc7649a549" path="/var/lib/kubelet/pods/1370386e-d1d5-471c-a3cc-fcbc7649a549/volumes" Jan 21 13:07:23 crc kubenswrapper[4765]: I0121 13:07:23.623616 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bd12a18-d34b-4d96-9409-f26a13dc93f5" path="/var/lib/kubelet/pods/4bd12a18-d34b-4d96-9409-f26a13dc93f5/volumes" Jan 21 13:07:23 crc kubenswrapper[4765]: I0121 13:07:23.624725 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f46c9a8-ee1d-497c-92f3-d7f43ebddc85" path="/var/lib/kubelet/pods/8f46c9a8-ee1d-497c-92f3-d7f43ebddc85/volumes" Jan 21 13:07:23 crc kubenswrapper[4765]: I0121 13:07:23.625449 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de62a4d5-de79-4ad5-983d-7071fb85dce8" path="/var/lib/kubelet/pods/de62a4d5-de79-4ad5-983d-7071fb85dce8/volumes" Jan 21 13:07:23 crc kubenswrapper[4765]: I0121 13:07:23.651741 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 13:07:24 crc kubenswrapper[4765]: I0121 13:07:24.839553 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 13:07:28 crc kubenswrapper[4765]: I0121 13:07:28.797044 4765 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 13:07:28 crc kubenswrapper[4765]: I0121 13:07:28.798273 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://7ce4552dff00d86594c37e0f77c3cf45aa3bcc555401456b5ff0411bc246106a" gracePeriod=5 Jan 21 13:07:31 crc kubenswrapper[4765]: I0121 13:07:31.845795 4765 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 13:07:32 crc kubenswrapper[4765]: I0121 13:07:32.614267 4765 scope.go:117] "RemoveContainer" containerID="fd7256b572d35d51a89a26a9d809e66ddaef4668fab63ce1fc2ed58bfc6736f7" Jan 21 13:07:33 crc kubenswrapper[4765]: I0121 13:07:33.175082 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/2.log" Jan 21 13:07:33 crc kubenswrapper[4765]: I0121 13:07:33.175426 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b434a20db972429a5693ff0e4a81a4b323367ac7349b6f1f16c3e1e5e890f70b"} Jan 21 13:07:34 crc kubenswrapper[4765]: I0121 13:07:34.183726 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 13:07:34 crc kubenswrapper[4765]: I0121 13:07:34.184160 4765 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="7ce4552dff00d86594c37e0f77c3cf45aa3bcc555401456b5ff0411bc246106a" exitCode=137 Jan 21 13:07:34 crc kubenswrapper[4765]: I0121 13:07:34.375710 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 13:07:34 crc kubenswrapper[4765]: I0121 13:07:34.375803 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 13:07:34 crc kubenswrapper[4765]: I0121 13:07:34.449411 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 13:07:34 crc kubenswrapper[4765]: I0121 13:07:34.449501 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 13:07:34 crc kubenswrapper[4765]: I0121 13:07:34.449528 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 13:07:34 crc kubenswrapper[4765]: I0121 13:07:34.449554 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 13:07:34 crc kubenswrapper[4765]: I0121 13:07:34.449585 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 13:07:34 crc 
kubenswrapper[4765]: I0121 13:07:34.449810 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:07:34 crc kubenswrapper[4765]: I0121 13:07:34.450507 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:07:34 crc kubenswrapper[4765]: I0121 13:07:34.450628 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:07:34 crc kubenswrapper[4765]: I0121 13:07:34.450813 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:07:34 crc kubenswrapper[4765]: I0121 13:07:34.459990 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 13:07:34 crc kubenswrapper[4765]: I0121 13:07:34.551433 4765 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 21 13:07:34 crc kubenswrapper[4765]: I0121 13:07:34.551485 4765 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 21 13:07:34 crc kubenswrapper[4765]: I0121 13:07:34.551496 4765 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 21 13:07:34 crc kubenswrapper[4765]: I0121 13:07:34.551504 4765 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 21 13:07:34 crc kubenswrapper[4765]: I0121 13:07:34.551513 4765 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 21 13:07:35 crc kubenswrapper[4765]: I0121 13:07:35.193107 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 21 13:07:35 crc kubenswrapper[4765]: I0121 13:07:35.193229 4765 scope.go:117] "RemoveContainer" containerID="7ce4552dff00d86594c37e0f77c3cf45aa3bcc555401456b5ff0411bc246106a"
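All five volumes of the startup-monitor pod above detach within the same millisecond and with DevicePath "" because they are hostPath volumes: the directories (/var/log, /var/lock, the manifest and resource dirs) belong to the node and nothing was ever mounted for the pod, so TearDown is pure bookkeeping. A sketch of why that step is trivially fast; the unmounter interface and hostPathUnmounter type are invented for illustration, not the kubelet's volume plugin API:

package main

import "fmt"

// unmounter loosely models the per-plugin TearDown step that
// operation_generator.go logs as "UnmountVolume.TearDown succeeded".
type unmounter interface {
	TearDown() error
}

// A hostPath volume is only a reference to a directory owned by the node,
// so there is no mount to undo and no data to delete. That is why each
// volume above detaches instantly with an empty DevicePath.
type hostPathUnmounter struct{ path string }

var _ unmounter = hostPathUnmounter{} // compile-time interface check

func (h hostPathUnmounter) TearDown() error { return nil } // bookkeeping only

func main() {
	for _, v := range []hostPathUnmounter{{"/var/log"}, {"/var/lock"}} {
		fmt.Printf("torn down %s: %v\n", v.path, v.TearDown() == nil)
	}
}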
Jan 21 13:07:35 crc kubenswrapper[4765]: I0121 13:07:35.193375 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 13:07:35 crc kubenswrapper[4765]: I0121 13:07:35.623950 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 21 13:07:35 crc kubenswrapper[4765]: I0121 13:07:35.624321 4765 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 21 13:07:35 crc kubenswrapper[4765]: I0121 13:07:35.640010 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 21 13:07:35 crc kubenswrapper[4765]: I0121 13:07:35.640443 4765 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="51516b55-a7ef-4208-a94f-663e43615cf3"
Jan 21 13:07:35 crc kubenswrapper[4765]: I0121 13:07:35.649580 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 21 13:07:35 crc kubenswrapper[4765]: I0121 13:07:35.649641 4765 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="51516b55-a7ef-4208-a94f-663e43615cf3"
Jan 21 13:07:41 crc kubenswrapper[4765]: I0121 13:07:41.254683 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 21 13:07:41 crc kubenswrapper[4765]: I0121 13:07:41.256989 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 21 13:07:41 crc kubenswrapper[4765]: I0121 13:07:41.257079 4765 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="bdcb8c297cbd6fda01e719c58cef4ef067896dbd93516b16f73b6e62d1ad8fe2" exitCode=137
Jan 21 13:07:41 crc kubenswrapper[4765]: I0121 13:07:41.257132 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"bdcb8c297cbd6fda01e719c58cef4ef067896dbd93516b16f73b6e62d1ad8fe2"}
Jan 21 13:07:41 crc kubenswrapper[4765]: I0121 13:07:41.257177 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c9f1f22995d5070c8275a209c15dd59b6cb11e2b762ac8fe06326dba1ec6c459"}
Jan 21 13:07:41 crc kubenswrapper[4765]: I0121 13:07:41.257202 4765 scope.go:117] "RemoveContainer" containerID="cff66ff27b56dd5762c4175119f289eddff5ee0905d500b7fe67147707880c91"
Jan 21 13:07:42 crc kubenswrapper[4765]: I0121 13:07:42.266301 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 21 13:07:46 crc kubenswrapper[4765]: I0121 13:07:46.869501 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
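The kube-controller-manager container above exited with code 137 and was restarted; the entries below show its startup probe failing once at 13:07:50 with "connection refused" (the new process is not listening on 10257 yet) and succeeding by 13:08:00, when status flips to "started" and the readiness probe takes over. A minimal sketch of that poll-until-started loop; the URL matches the probe target in the log, while the period and threshold are illustrative values that would really come from the pod's startupProbe spec:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeUntilStarted polls a healthz endpoint the way a startup probe does:
// failures are tolerated until failureThreshold is exhausted, and the first
// success marks the container "started".
func probeUntilStarted(url string, period time.Duration, failureThreshold int) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// kube-controller-manager serves /healthz over a self-signed cert,
		// so this probe-style client skips verification (sketch only).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < failureThreshold; i++ {
		if resp, err := client.Get(url); err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				return true // probe="startup" status="started"
			}
		}
		// e.g. "dial tcp 192.168.126.11:10257: connect: connection refused"
		time.Sleep(period)
	}
	return false // threshold exhausted: the kubelet would restart the container
}

func main() {
	fmt.Println(probeUntilStarted("https://192.168.126.11:10257/healthz", 10*time.Second, 30))
}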
Jan 21 13:07:50 crc kubenswrapper[4765]: I0121 13:07:50.320308 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 13:07:50 crc kubenswrapper[4765]: I0121 13:07:50.320700 4765 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 21 13:07:50 crc kubenswrapper[4765]: I0121 13:07:50.320775 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 21 13:08:00 crc kubenswrapper[4765]: I0121 13:08:00.325349 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 13:08:00 crc kubenswrapper[4765]: I0121 13:08:00.330546 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.836623 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq"]
Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.836996 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhjpc"]
Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.837232 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc" podUID="fc58cdb9-8e5c-426c-a193-994e3b2ce117" containerName="controller-manager" containerID="cri-o://7daf20d8f550c1dae853b4d7a1662050a7ba378e76433339532a2fe3175fdeec" gracePeriod=30
Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.837463 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq" podUID="6525c86b-8810-4639-8d16-93d25fac15a9" containerName="route-controller-manager" containerID="cri-o://2e23bc394912bb8db67faefe272b87f521c37c4cb2b09a7e6379c1b9824c921b" gracePeriod=30
Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881236 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7bhqm"]
Jan 21 13:08:08 crc kubenswrapper[4765]: E0121 13:08:08.881513 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080522e6-050a-4df7-afe5-2476e455e157" containerName="extract-utilities"
Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881527 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="080522e6-050a-4df7-afe5-2476e455e157" containerName="extract-utilities"
Jan 21 13:08:08 crc kubenswrapper[4765]: E0121 13:08:08.881539 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1370386e-d1d5-471c-a3cc-fcbc7649a549" containerName="registry-server"
Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881545 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1370386e-d1d5-471c-a3cc-fcbc7649a549" containerName="registry-server"
Jan 21 13:08:08 crc kubenswrapper[4765]: E0121 13:08:08.881555 4765 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1370386e-d1d5-471c-a3cc-fcbc7649a549" containerName="extract-content" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881564 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1370386e-d1d5-471c-a3cc-fcbc7649a549" containerName="extract-content" Jan 21 13:08:08 crc kubenswrapper[4765]: E0121 13:08:08.881573 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bd12a18-d34b-4d96-9409-f26a13dc93f5" containerName="registry-server" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881580 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bd12a18-d34b-4d96-9409-f26a13dc93f5" containerName="registry-server" Jan 21 13:08:08 crc kubenswrapper[4765]: E0121 13:08:08.881593 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f46c9a8-ee1d-497c-92f3-d7f43ebddc85" containerName="extract-content" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881599 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f46c9a8-ee1d-497c-92f3-d7f43ebddc85" containerName="extract-content" Jan 21 13:08:08 crc kubenswrapper[4765]: E0121 13:08:08.881610 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bd12a18-d34b-4d96-9409-f26a13dc93f5" containerName="extract-utilities" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881615 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bd12a18-d34b-4d96-9409-f26a13dc93f5" containerName="extract-utilities" Jan 21 13:08:08 crc kubenswrapper[4765]: E0121 13:08:08.881625 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f46c9a8-ee1d-497c-92f3-d7f43ebddc85" containerName="registry-server" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881631 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f46c9a8-ee1d-497c-92f3-d7f43ebddc85" containerName="registry-server" Jan 21 13:08:08 crc kubenswrapper[4765]: E0121 13:08:08.881637 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de62a4d5-de79-4ad5-983d-7071fb85dce8" containerName="marketplace-operator" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881645 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="de62a4d5-de79-4ad5-983d-7071fb85dce8" containerName="marketplace-operator" Jan 21 13:08:08 crc kubenswrapper[4765]: E0121 13:08:08.881655 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080522e6-050a-4df7-afe5-2476e455e157" containerName="registry-server" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881661 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="080522e6-050a-4df7-afe5-2476e455e157" containerName="registry-server" Jan 21 13:08:08 crc kubenswrapper[4765]: E0121 13:08:08.881672 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080522e6-050a-4df7-afe5-2476e455e157" containerName="extract-content" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881680 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="080522e6-050a-4df7-afe5-2476e455e157" containerName="extract-content" Jan 21 13:08:08 crc kubenswrapper[4765]: E0121 13:08:08.881689 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f46c9a8-ee1d-497c-92f3-d7f43ebddc85" containerName="extract-utilities" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881696 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f46c9a8-ee1d-497c-92f3-d7f43ebddc85" containerName="extract-utilities" Jan 21 13:08:08 crc kubenswrapper[4765]: E0121 
13:08:08.881704 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881711 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 13:08:08 crc kubenswrapper[4765]: E0121 13:08:08.881717 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bd12a18-d34b-4d96-9409-f26a13dc93f5" containerName="extract-content" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881722 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bd12a18-d34b-4d96-9409-f26a13dc93f5" containerName="extract-content" Jan 21 13:08:08 crc kubenswrapper[4765]: E0121 13:08:08.881729 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1370386e-d1d5-471c-a3cc-fcbc7649a549" containerName="extract-utilities" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881735 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1370386e-d1d5-471c-a3cc-fcbc7649a549" containerName="extract-utilities" Jan 21 13:08:08 crc kubenswrapper[4765]: E0121 13:08:08.881744 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a0b174-c516-4ed9-9204-e1f15dd18d59" containerName="installer" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881753 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a0b174-c516-4ed9-9204-e1f15dd18d59" containerName="installer" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881853 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="de62a4d5-de79-4ad5-983d-7071fb85dce8" containerName="marketplace-operator" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881867 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bd12a18-d34b-4d96-9409-f26a13dc93f5" containerName="registry-server" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881876 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f46c9a8-ee1d-497c-92f3-d7f43ebddc85" containerName="registry-server" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881884 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="1370386e-d1d5-471c-a3cc-fcbc7649a549" containerName="registry-server" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881890 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a0b174-c516-4ed9-9204-e1f15dd18d59" containerName="installer" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881897 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="080522e6-050a-4df7-afe5-2476e455e157" containerName="registry-server" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.881909 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.882363 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7bhqm" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.900732 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.900986 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.901028 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.901719 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.909200 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 13:08:08 crc kubenswrapper[4765]: I0121 13:08:08.922480 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7bhqm"] Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.065262 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg42h\" (UniqueName: \"kubernetes.io/projected/ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6-kube-api-access-mg42h\") pod \"marketplace-operator-79b997595-7bhqm\" (UID: \"ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6\") " pod="openshift-marketplace/marketplace-operator-79b997595-7bhqm" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.065340 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7bhqm\" (UID: \"ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6\") " pod="openshift-marketplace/marketplace-operator-79b997595-7bhqm" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.065406 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7bhqm\" (UID: \"ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6\") " pod="openshift-marketplace/marketplace-operator-79b997595-7bhqm" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.167620 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg42h\" (UniqueName: \"kubernetes.io/projected/ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6-kube-api-access-mg42h\") pod \"marketplace-operator-79b997595-7bhqm\" (UID: \"ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6\") " pod="openshift-marketplace/marketplace-operator-79b997595-7bhqm" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.167685 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7bhqm\" (UID: \"ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6\") " pod="openshift-marketplace/marketplace-operator-79b997595-7bhqm" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.167733 4765 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7bhqm\" (UID: \"ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6\") " pod="openshift-marketplace/marketplace-operator-79b997595-7bhqm" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.171123 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7bhqm\" (UID: \"ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6\") " pod="openshift-marketplace/marketplace-operator-79b997595-7bhqm" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.196107 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7bhqm\" (UID: \"ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6\") " pod="openshift-marketplace/marketplace-operator-79b997595-7bhqm" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.219376 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg42h\" (UniqueName: \"kubernetes.io/projected/ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6-kube-api-access-mg42h\") pod \"marketplace-operator-79b997595-7bhqm\" (UID: \"ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6\") " pod="openshift-marketplace/marketplace-operator-79b997595-7bhqm" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.316207 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.324319 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.418988 4765 generic.go:334] "Generic (PLEG): container finished" podID="6525c86b-8810-4639-8d16-93d25fac15a9" containerID="2e23bc394912bb8db67faefe272b87f521c37c4cb2b09a7e6379c1b9824c921b" exitCode=0 Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.419064 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq" event={"ID":"6525c86b-8810-4639-8d16-93d25fac15a9","Type":"ContainerDied","Data":"2e23bc394912bb8db67faefe272b87f521c37c4cb2b09a7e6379c1b9824c921b"} Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.419100 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq" event={"ID":"6525c86b-8810-4639-8d16-93d25fac15a9","Type":"ContainerDied","Data":"b11fdccfeaa59d16165d7b17dde9993f7ee695e01951d26564cd17580bfceaac"} Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.419119 4765 scope.go:117] "RemoveContainer" containerID="2e23bc394912bb8db67faefe272b87f521c37c4cb2b09a7e6379c1b9824c921b" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.419276 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.421628 4765 generic.go:334] "Generic (PLEG): container finished" podID="fc58cdb9-8e5c-426c-a193-994e3b2ce117" containerID="7daf20d8f550c1dae853b4d7a1662050a7ba378e76433339532a2fe3175fdeec" exitCode=0 Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.421667 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc" event={"ID":"fc58cdb9-8e5c-426c-a193-994e3b2ce117","Type":"ContainerDied","Data":"7daf20d8f550c1dae853b4d7a1662050a7ba378e76433339532a2fe3175fdeec"} Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.421692 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc" event={"ID":"fc58cdb9-8e5c-426c-a193-994e3b2ce117","Type":"ContainerDied","Data":"60922b69e1ca0878eeab7681e6c6936be5df495b6e464185ebe754ab6d62a4f0"} Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.421747 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dhjpc" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.439007 4765 scope.go:117] "RemoveContainer" containerID="2e23bc394912bb8db67faefe272b87f521c37c4cb2b09a7e6379c1b9824c921b" Jan 21 13:08:09 crc kubenswrapper[4765]: E0121 13:08:09.442808 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e23bc394912bb8db67faefe272b87f521c37c4cb2b09a7e6379c1b9824c921b\": container with ID starting with 2e23bc394912bb8db67faefe272b87f521c37c4cb2b09a7e6379c1b9824c921b not found: ID does not exist" containerID="2e23bc394912bb8db67faefe272b87f521c37c4cb2b09a7e6379c1b9824c921b" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.442865 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e23bc394912bb8db67faefe272b87f521c37c4cb2b09a7e6379c1b9824c921b"} err="failed to get container status \"2e23bc394912bb8db67faefe272b87f521c37c4cb2b09a7e6379c1b9824c921b\": rpc error: code = NotFound desc = could not find container \"2e23bc394912bb8db67faefe272b87f521c37c4cb2b09a7e6379c1b9824c921b\": container with ID starting with 2e23bc394912bb8db67faefe272b87f521c37c4cb2b09a7e6379c1b9824c921b not found: ID does not exist" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.442903 4765 scope.go:117] "RemoveContainer" containerID="7daf20d8f550c1dae853b4d7a1662050a7ba378e76433339532a2fe3175fdeec" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.473760 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-config\") pod \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.473827 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6525c86b-8810-4639-8d16-93d25fac15a9-serving-cert\") pod \"6525c86b-8810-4639-8d16-93d25fac15a9\" (UID: \"6525c86b-8810-4639-8d16-93d25fac15a9\") " Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.473885 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6525c86b-8810-4639-8d16-93d25fac15a9-config\") pod \"6525c86b-8810-4639-8d16-93d25fac15a9\" (UID: \"6525c86b-8810-4639-8d16-93d25fac15a9\") " Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.473907 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc58cdb9-8e5c-426c-a193-994e3b2ce117-serving-cert\") pod \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.473940 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk7g5\" (UniqueName: \"kubernetes.io/projected/6525c86b-8810-4639-8d16-93d25fac15a9-kube-api-access-nk7g5\") pod \"6525c86b-8810-4639-8d16-93d25fac15a9\" (UID: \"6525c86b-8810-4639-8d16-93d25fac15a9\") " Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.473975 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-proxy-ca-bundles\") pod \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.474016 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-client-ca\") pod \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.474052 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6525c86b-8810-4639-8d16-93d25fac15a9-client-ca\") pod \"6525c86b-8810-4639-8d16-93d25fac15a9\" (UID: \"6525c86b-8810-4639-8d16-93d25fac15a9\") " Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.474147 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk2v4\" (UniqueName: \"kubernetes.io/projected/fc58cdb9-8e5c-426c-a193-994e3b2ce117-kube-api-access-tk2v4\") pod \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\" (UID: \"fc58cdb9-8e5c-426c-a193-994e3b2ce117\") " Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.476153 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fc58cdb9-8e5c-426c-a193-994e3b2ce117" (UID: "fc58cdb9-8e5c-426c-a193-994e3b2ce117"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.476156 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6525c86b-8810-4639-8d16-93d25fac15a9-config" (OuterVolumeSpecName: "config") pod "6525c86b-8810-4639-8d16-93d25fac15a9" (UID: "6525c86b-8810-4639-8d16-93d25fac15a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.476749 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-client-ca" (OuterVolumeSpecName: "client-ca") pod "fc58cdb9-8e5c-426c-a193-994e3b2ce117" (UID: "fc58cdb9-8e5c-426c-a193-994e3b2ce117"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.476795 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-config" (OuterVolumeSpecName: "config") pod "fc58cdb9-8e5c-426c-a193-994e3b2ce117" (UID: "fc58cdb9-8e5c-426c-a193-994e3b2ce117"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.478450 4765 scope.go:117] "RemoveContainer" containerID="7daf20d8f550c1dae853b4d7a1662050a7ba378e76433339532a2fe3175fdeec" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.481548 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6525c86b-8810-4639-8d16-93d25fac15a9-kube-api-access-nk7g5" (OuterVolumeSpecName: "kube-api-access-nk7g5") pod "6525c86b-8810-4639-8d16-93d25fac15a9" (UID: "6525c86b-8810-4639-8d16-93d25fac15a9"). InnerVolumeSpecName "kube-api-access-nk7g5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.483313 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6525c86b-8810-4639-8d16-93d25fac15a9-client-ca" (OuterVolumeSpecName: "client-ca") pod "6525c86b-8810-4639-8d16-93d25fac15a9" (UID: "6525c86b-8810-4639-8d16-93d25fac15a9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.483739 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc58cdb9-8e5c-426c-a193-994e3b2ce117-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fc58cdb9-8e5c-426c-a193-994e3b2ce117" (UID: "fc58cdb9-8e5c-426c-a193-994e3b2ce117"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.484062 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6525c86b-8810-4639-8d16-93d25fac15a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6525c86b-8810-4639-8d16-93d25fac15a9" (UID: "6525c86b-8810-4639-8d16-93d25fac15a9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.486900 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc58cdb9-8e5c-426c-a193-994e3b2ce117-kube-api-access-tk2v4" (OuterVolumeSpecName: "kube-api-access-tk2v4") pod "fc58cdb9-8e5c-426c-a193-994e3b2ce117" (UID: "fc58cdb9-8e5c-426c-a193-994e3b2ce117"). InnerVolumeSpecName "kube-api-access-tk2v4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:08:09 crc kubenswrapper[4765]: E0121 13:08:09.487096 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7daf20d8f550c1dae853b4d7a1662050a7ba378e76433339532a2fe3175fdeec\": container with ID starting with 7daf20d8f550c1dae853b4d7a1662050a7ba378e76433339532a2fe3175fdeec not found: ID does not exist" containerID="7daf20d8f550c1dae853b4d7a1662050a7ba378e76433339532a2fe3175fdeec" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.487167 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7daf20d8f550c1dae853b4d7a1662050a7ba378e76433339532a2fe3175fdeec"} err="failed to get container status \"7daf20d8f550c1dae853b4d7a1662050a7ba378e76433339532a2fe3175fdeec\": rpc error: code = NotFound desc = could not find container \"7daf20d8f550c1dae853b4d7a1662050a7ba378e76433339532a2fe3175fdeec\": container with ID starting with 7daf20d8f550c1dae853b4d7a1662050a7ba378e76433339532a2fe3175fdeec not found: ID does not exist" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.514682 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7bhqm" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.575979 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.576018 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6525c86b-8810-4639-8d16-93d25fac15a9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.576028 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6525c86b-8810-4639-8d16-93d25fac15a9-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.576038 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc58cdb9-8e5c-426c-a193-994e3b2ce117-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.576049 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk7g5\" (UniqueName: \"kubernetes.io/projected/6525c86b-8810-4639-8d16-93d25fac15a9-kube-api-access-nk7g5\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.576058 4765 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.576073 4765 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc58cdb9-8e5c-426c-a193-994e3b2ce117-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.576086 4765 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6525c86b-8810-4639-8d16-93d25fac15a9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.576094 4765 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-tk2v4\" (UniqueName: \"kubernetes.io/projected/fc58cdb9-8e5c-426c-a193-994e3b2ce117-kube-api-access-tk2v4\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.740548 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7bhqm"] Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.743374 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq"] Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.745836 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-t8tsq"] Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.758179 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhjpc"] Jan 21 13:08:09 crc kubenswrapper[4765]: I0121 13:08:09.763450 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhjpc"] Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.430443 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7bhqm" event={"ID":"ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6","Type":"ContainerStarted","Data":"487eface61e60f83e5bb0400602e2e1ade9433733d33f52770c83cb7ea0e37e3"} Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.430495 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7bhqm" event={"ID":"ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6","Type":"ContainerStarted","Data":"47a09174d3d3b46711b626cc9e7872d67ce739503fdb60910fbae18074440b1d"} Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.433436 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7bhqm" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.434822 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7bhqm" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.449167 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7bhqm" podStartSLOduration=2.449143317 podStartE2EDuration="2.449143317s" podCreationTimestamp="2026-01-21 13:08:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:08:10.4480771 +0000 UTC m=+351.465802922" watchObservedRunningTime="2026-01-21 13:08:10.449143317 +0000 UTC m=+351.466869139" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.557801 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xfs5k"] Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.911280 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj"] Jan 21 13:08:10 crc kubenswrapper[4765]: E0121 13:08:10.911626 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc58cdb9-8e5c-426c-a193-994e3b2ce117" containerName="controller-manager" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.911654 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc58cdb9-8e5c-426c-a193-994e3b2ce117" 
containerName="controller-manager" Jan 21 13:08:10 crc kubenswrapper[4765]: E0121 13:08:10.911689 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6525c86b-8810-4639-8d16-93d25fac15a9" containerName="route-controller-manager" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.911701 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="6525c86b-8810-4639-8d16-93d25fac15a9" containerName="route-controller-manager" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.911826 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="6525c86b-8810-4639-8d16-93d25fac15a9" containerName="route-controller-manager" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.911851 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc58cdb9-8e5c-426c-a193-994e3b2ce117" containerName="controller-manager" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.912421 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.918588 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.919560 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.919789 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.920455 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.920740 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.920983 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.923609 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt"] Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.924617 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.929878 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj"] Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.931056 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.931175 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.931254 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.931304 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.931447 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.931617 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.939073 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 13:08:10 crc kubenswrapper[4765]: I0121 13:08:10.948274 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt"] Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.103116 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-client-ca\") pod \"controller-manager-6cc5c7dcff-t6nwt\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.103172 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-config\") pod \"controller-manager-6cc5c7dcff-t6nwt\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.103198 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzzrj\" (UniqueName: \"kubernetes.io/projected/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-kube-api-access-gzzrj\") pod \"controller-manager-6cc5c7dcff-t6nwt\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.103248 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-config\") pod \"route-controller-manager-85968449f7-8p8cj\" (UID: \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" Jan 21 13:08:11 
crc kubenswrapper[4765]: I0121 13:08:11.103283 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dntqh\" (UniqueName: \"kubernetes.io/projected/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-kube-api-access-dntqh\") pod \"route-controller-manager-85968449f7-8p8cj\" (UID: \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.103306 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-serving-cert\") pod \"route-controller-manager-85968449f7-8p8cj\" (UID: \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.103350 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-proxy-ca-bundles\") pod \"controller-manager-6cc5c7dcff-t6nwt\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.103396 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-serving-cert\") pod \"controller-manager-6cc5c7dcff-t6nwt\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.103429 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-client-ca\") pod \"route-controller-manager-85968449f7-8p8cj\" (UID: \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.204684 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-client-ca\") pod \"route-controller-manager-85968449f7-8p8cj\" (UID: \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.205131 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-client-ca\") pod \"controller-manager-6cc5c7dcff-t6nwt\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.206592 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-config\") pod \"controller-manager-6cc5c7dcff-t6nwt\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.207714 4765 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzzrj\" (UniqueName: \"kubernetes.io/projected/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-kube-api-access-gzzrj\") pod \"controller-manager-6cc5c7dcff-t6nwt\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.208185 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-config\") pod \"route-controller-manager-85968449f7-8p8cj\" (UID: \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.209190 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dntqh\" (UniqueName: \"kubernetes.io/projected/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-kube-api-access-dntqh\") pod \"route-controller-manager-85968449f7-8p8cj\" (UID: \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.209599 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-serving-cert\") pod \"route-controller-manager-85968449f7-8p8cj\" (UID: \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.210586 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-proxy-ca-bundles\") pod \"controller-manager-6cc5c7dcff-t6nwt\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.206074 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-client-ca\") pod \"route-controller-manager-85968449f7-8p8cj\" (UID: \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.207668 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-config\") pod \"controller-manager-6cc5c7dcff-t6nwt\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.206342 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-client-ca\") pod \"controller-manager-6cc5c7dcff-t6nwt\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.209108 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-config\") pod \"route-controller-manager-85968449f7-8p8cj\" (UID: \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.211365 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-serving-cert\") pod \"controller-manager-6cc5c7dcff-t6nwt\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.212139 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-proxy-ca-bundles\") pod \"controller-manager-6cc5c7dcff-t6nwt\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.223466 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-serving-cert\") pod \"controller-manager-6cc5c7dcff-t6nwt\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.225260 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-serving-cert\") pod \"route-controller-manager-85968449f7-8p8cj\" (UID: \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.227965 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzzrj\" (UniqueName: \"kubernetes.io/projected/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-kube-api-access-gzzrj\") pod \"controller-manager-6cc5c7dcff-t6nwt\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.228138 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dntqh\" (UniqueName: \"kubernetes.io/projected/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-kube-api-access-dntqh\") pod \"route-controller-manager-85968449f7-8p8cj\" (UID: \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.237747 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.249690 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.505804 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj"] Jan 21 13:08:11 crc kubenswrapper[4765]: W0121 13:08:11.507939 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e0335fd_66f2_4150_9ddb_0a85b7ec373c.slice/crio-80b8ebf24ed42770700c7386198074bbf009372ba60841e9b71b3c6a7d10e4a9 WatchSource:0}: Error finding container 80b8ebf24ed42770700c7386198074bbf009372ba60841e9b71b3c6a7d10e4a9: Status 404 returned error can't find the container with id 80b8ebf24ed42770700c7386198074bbf009372ba60841e9b71b3c6a7d10e4a9 Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.534523 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt"] Jan 21 13:08:11 crc kubenswrapper[4765]: W0121 13:08:11.538276 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c78205e_c9cf_49ab_a37d_09b03f5cdaf5.slice/crio-57109751ef3e6a394267edcf6b1cd12c6e52dc1e9748ab2d84af13faa5138147 WatchSource:0}: Error finding container 57109751ef3e6a394267edcf6b1cd12c6e52dc1e9748ab2d84af13faa5138147: Status 404 returned error can't find the container with id 57109751ef3e6a394267edcf6b1cd12c6e52dc1e9748ab2d84af13faa5138147 Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.621406 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6525c86b-8810-4639-8d16-93d25fac15a9" path="/var/lib/kubelet/pods/6525c86b-8810-4639-8d16-93d25fac15a9/volumes" Jan 21 13:08:11 crc kubenswrapper[4765]: I0121 13:08:11.622385 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc58cdb9-8e5c-426c-a193-994e3b2ce117" path="/var/lib/kubelet/pods/fc58cdb9-8e5c-426c-a193-994e3b2ce117/volumes" Jan 21 13:08:12 crc kubenswrapper[4765]: I0121 13:08:12.444046 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" event={"ID":"7e0335fd-66f2-4150-9ddb-0a85b7ec373c","Type":"ContainerStarted","Data":"52057e023d2b0e7797d526fd2e43bfe240f4a1005408523f74fe200c4111f79f"} Jan 21 13:08:12 crc kubenswrapper[4765]: I0121 13:08:12.444157 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" event={"ID":"7e0335fd-66f2-4150-9ddb-0a85b7ec373c","Type":"ContainerStarted","Data":"80b8ebf24ed42770700c7386198074bbf009372ba60841e9b71b3c6a7d10e4a9"} Jan 21 13:08:12 crc kubenswrapper[4765]: I0121 13:08:12.444746 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" Jan 21 13:08:12 crc kubenswrapper[4765]: I0121 13:08:12.446779 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" event={"ID":"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5","Type":"ContainerStarted","Data":"ee227909e6fe8b612b17e0c49f043bffba6e503a3bc7e6982c5100373c69fe54"} Jan 21 13:08:12 crc kubenswrapper[4765]: I0121 13:08:12.446862 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" 
event={"ID":"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5","Type":"ContainerStarted","Data":"57109751ef3e6a394267edcf6b1cd12c6e52dc1e9748ab2d84af13faa5138147"} Jan 21 13:08:12 crc kubenswrapper[4765]: I0121 13:08:12.453707 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" Jan 21 13:08:12 crc kubenswrapper[4765]: I0121 13:08:12.465933 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" podStartSLOduration=4.465905546 podStartE2EDuration="4.465905546s" podCreationTimestamp="2026-01-21 13:08:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:08:12.463481483 +0000 UTC m=+353.481207335" watchObservedRunningTime="2026-01-21 13:08:12.465905546 +0000 UTC m=+353.483631378" Jan 21 13:08:12 crc kubenswrapper[4765]: I0121 13:08:12.492307 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" podStartSLOduration=3.492244189 podStartE2EDuration="3.492244189s" podCreationTimestamp="2026-01-21 13:08:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:08:12.482033099 +0000 UTC m=+353.499758931" watchObservedRunningTime="2026-01-21 13:08:12.492244189 +0000 UTC m=+353.509970011" Jan 21 13:08:13 crc kubenswrapper[4765]: I0121 13:08:13.452977 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:13 crc kubenswrapper[4765]: I0121 13:08:13.458806 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:28 crc kubenswrapper[4765]: I0121 13:08:28.452256 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt"] Jan 21 13:08:28 crc kubenswrapper[4765]: I0121 13:08:28.453179 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" podUID="2c78205e-c9cf-49ab-a37d-09b03f5cdaf5" containerName="controller-manager" containerID="cri-o://ee227909e6fe8b612b17e0c49f043bffba6e503a3bc7e6982c5100373c69fe54" gracePeriod=30 Jan 21 13:08:28 crc kubenswrapper[4765]: I0121 13:08:28.478310 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj"] Jan 21 13:08:28 crc kubenswrapper[4765]: I0121 13:08:28.478559 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" podUID="7e0335fd-66f2-4150-9ddb-0a85b7ec373c" containerName="route-controller-manager" containerID="cri-o://52057e023d2b0e7797d526fd2e43bfe240f4a1005408523f74fe200c4111f79f" gracePeriod=30 Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.037108 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.134713 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt"
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.207808 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-config\") pod \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\" (UID: \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\") "
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.207939 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-serving-cert\") pod \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\" (UID: \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\") "
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.207977 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-client-ca\") pod \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\" (UID: \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\") "
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.208026 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dntqh\" (UniqueName: \"kubernetes.io/projected/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-kube-api-access-dntqh\") pod \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\" (UID: \"7e0335fd-66f2-4150-9ddb-0a85b7ec373c\") "
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.208679 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-config" (OuterVolumeSpecName: "config") pod "7e0335fd-66f2-4150-9ddb-0a85b7ec373c" (UID: "7e0335fd-66f2-4150-9ddb-0a85b7ec373c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.208937 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-client-ca" (OuterVolumeSpecName: "client-ca") pod "7e0335fd-66f2-4150-9ddb-0a85b7ec373c" (UID: "7e0335fd-66f2-4150-9ddb-0a85b7ec373c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.213520 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7e0335fd-66f2-4150-9ddb-0a85b7ec373c" (UID: "7e0335fd-66f2-4150-9ddb-0a85b7ec373c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.213843 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-kube-api-access-dntqh" (OuterVolumeSpecName: "kube-api-access-dntqh") pod "7e0335fd-66f2-4150-9ddb-0a85b7ec373c" (UID: "7e0335fd-66f2-4150-9ddb-0a85b7ec373c"). InnerVolumeSpecName "kube-api-access-dntqh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.309035 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-proxy-ca-bundles\") pod \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") "
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.309220 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-client-ca\") pod \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") "
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.309247 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzzrj\" (UniqueName: \"kubernetes.io/projected/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-kube-api-access-gzzrj\") pod \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") "
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.309277 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-serving-cert\") pod \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") "
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.309343 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-config\") pod \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\" (UID: \"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5\") "
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.309596 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-config\") on node \"crc\" DevicePath \"\""
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.309609 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.309618 4765 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.309631 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dntqh\" (UniqueName: \"kubernetes.io/projected/7e0335fd-66f2-4150-9ddb-0a85b7ec373c-kube-api-access-dntqh\") on node \"crc\" DevicePath \"\""
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.310241 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-client-ca" (OuterVolumeSpecName: "client-ca") pod "2c78205e-c9cf-49ab-a37d-09b03f5cdaf5" (UID: "2c78205e-c9cf-49ab-a37d-09b03f5cdaf5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.310364 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-config" (OuterVolumeSpecName: "config") pod "2c78205e-c9cf-49ab-a37d-09b03f5cdaf5" (UID: "2c78205e-c9cf-49ab-a37d-09b03f5cdaf5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.310760 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2c78205e-c9cf-49ab-a37d-09b03f5cdaf5" (UID: "2c78205e-c9cf-49ab-a37d-09b03f5cdaf5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.312408 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-kube-api-access-gzzrj" (OuterVolumeSpecName: "kube-api-access-gzzrj") pod "2c78205e-c9cf-49ab-a37d-09b03f5cdaf5" (UID: "2c78205e-c9cf-49ab-a37d-09b03f5cdaf5"). InnerVolumeSpecName "kube-api-access-gzzrj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.312784 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2c78205e-c9cf-49ab-a37d-09b03f5cdaf5" (UID: "2c78205e-c9cf-49ab-a37d-09b03f5cdaf5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.410794 4765 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.410829 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzzrj\" (UniqueName: \"kubernetes.io/projected/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-kube-api-access-gzzrj\") on node \"crc\" DevicePath \"\""
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.410841 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.410850 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-config\") on node \"crc\" DevicePath \"\""
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.410859 4765 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.561586 4765 generic.go:334] "Generic (PLEG): container finished" podID="2c78205e-c9cf-49ab-a37d-09b03f5cdaf5" containerID="ee227909e6fe8b612b17e0c49f043bffba6e503a3bc7e6982c5100373c69fe54" exitCode=0
pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" event={"ID":"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5","Type":"ContainerDied","Data":"ee227909e6fe8b612b17e0c49f043bffba6e503a3bc7e6982c5100373c69fe54"} Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.561732 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" event={"ID":"2c78205e-c9cf-49ab-a37d-09b03f5cdaf5","Type":"ContainerDied","Data":"57109751ef3e6a394267edcf6b1cd12c6e52dc1e9748ab2d84af13faa5138147"} Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.561753 4765 scope.go:117] "RemoveContainer" containerID="ee227909e6fe8b612b17e0c49f043bffba6e503a3bc7e6982c5100373c69fe54" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.562463 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.563288 4765 generic.go:334] "Generic (PLEG): container finished" podID="7e0335fd-66f2-4150-9ddb-0a85b7ec373c" containerID="52057e023d2b0e7797d526fd2e43bfe240f4a1005408523f74fe200c4111f79f" exitCode=0 Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.563367 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" event={"ID":"7e0335fd-66f2-4150-9ddb-0a85b7ec373c","Type":"ContainerDied","Data":"52057e023d2b0e7797d526fd2e43bfe240f4a1005408523f74fe200c4111f79f"} Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.563415 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" event={"ID":"7e0335fd-66f2-4150-9ddb-0a85b7ec373c","Type":"ContainerDied","Data":"80b8ebf24ed42770700c7386198074bbf009372ba60841e9b71b3c6a7d10e4a9"} Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.563327 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.577758 4765 scope.go:117] "RemoveContainer" containerID="ee227909e6fe8b612b17e0c49f043bffba6e503a3bc7e6982c5100373c69fe54" Jan 21 13:08:29 crc kubenswrapper[4765]: E0121 13:08:29.578439 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee227909e6fe8b612b17e0c49f043bffba6e503a3bc7e6982c5100373c69fe54\": container with ID starting with ee227909e6fe8b612b17e0c49f043bffba6e503a3bc7e6982c5100373c69fe54 not found: ID does not exist" containerID="ee227909e6fe8b612b17e0c49f043bffba6e503a3bc7e6982c5100373c69fe54" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.578499 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee227909e6fe8b612b17e0c49f043bffba6e503a3bc7e6982c5100373c69fe54"} err="failed to get container status \"ee227909e6fe8b612b17e0c49f043bffba6e503a3bc7e6982c5100373c69fe54\": rpc error: code = NotFound desc = could not find container \"ee227909e6fe8b612b17e0c49f043bffba6e503a3bc7e6982c5100373c69fe54\": container with ID starting with ee227909e6fe8b612b17e0c49f043bffba6e503a3bc7e6982c5100373c69fe54 not found: ID does not exist" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.578546 4765 scope.go:117] "RemoveContainer" containerID="52057e023d2b0e7797d526fd2e43bfe240f4a1005408523f74fe200c4111f79f" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.594660 4765 scope.go:117] "RemoveContainer" containerID="52057e023d2b0e7797d526fd2e43bfe240f4a1005408523f74fe200c4111f79f" Jan 21 13:08:29 crc kubenswrapper[4765]: E0121 13:08:29.595347 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52057e023d2b0e7797d526fd2e43bfe240f4a1005408523f74fe200c4111f79f\": container with ID starting with 52057e023d2b0e7797d526fd2e43bfe240f4a1005408523f74fe200c4111f79f not found: ID does not exist" containerID="52057e023d2b0e7797d526fd2e43bfe240f4a1005408523f74fe200c4111f79f" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.595404 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52057e023d2b0e7797d526fd2e43bfe240f4a1005408523f74fe200c4111f79f"} err="failed to get container status \"52057e023d2b0e7797d526fd2e43bfe240f4a1005408523f74fe200c4111f79f\": rpc error: code = NotFound desc = could not find container \"52057e023d2b0e7797d526fd2e43bfe240f4a1005408523f74fe200c4111f79f\": container with ID starting with 52057e023d2b0e7797d526fd2e43bfe240f4a1005408523f74fe200c4111f79f not found: ID does not exist" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.601571 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj"] Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.607490 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85968449f7-8p8cj"] Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.624235 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e0335fd-66f2-4150-9ddb-0a85b7ec373c" path="/var/lib/kubelet/pods/7e0335fd-66f2-4150-9ddb-0a85b7ec373c/volumes" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.624726 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt"] Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.624758 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6cc5c7dcff-t6nwt"] Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.927854 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-2c99b"] Jan 21 13:08:29 crc kubenswrapper[4765]: E0121 13:08:29.928192 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c78205e-c9cf-49ab-a37d-09b03f5cdaf5" containerName="controller-manager" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.928224 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c78205e-c9cf-49ab-a37d-09b03f5cdaf5" containerName="controller-manager" Jan 21 13:08:29 crc kubenswrapper[4765]: E0121 13:08:29.928245 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e0335fd-66f2-4150-9ddb-0a85b7ec373c" containerName="route-controller-manager" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.928252 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e0335fd-66f2-4150-9ddb-0a85b7ec373c" containerName="route-controller-manager" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.928355 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c78205e-c9cf-49ab-a37d-09b03f5cdaf5" containerName="controller-manager" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.928368 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e0335fd-66f2-4150-9ddb-0a85b7ec373c" containerName="route-controller-manager" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.928915 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.931637 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8"] Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.932703 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.933399 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.933408 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.933577 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.933715 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.934001 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.944058 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.945171 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.946129 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.946793 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.955897 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.956918 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.957650 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.992950 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 13:08:29 crc kubenswrapper[4765]: I0121 13:08:29.998291 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8"] Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.012758 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-2c99b"] Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.020293 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/206a9b63-672c-479e-a3b3-6c59a0b0bc89-client-ca\") pod \"controller-manager-5b85888b7c-2c99b\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.020558 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/ddfec557-645d-4fa3-9545-38a78135a452-client-ca\") pod \"route-controller-manager-7484d9ddcc-qghj8\" (UID: \"ddfec557-645d-4fa3-9545-38a78135a452\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.020744 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztbqg\" (UniqueName: \"kubernetes.io/projected/206a9b63-672c-479e-a3b3-6c59a0b0bc89-kube-api-access-ztbqg\") pod \"controller-manager-5b85888b7c-2c99b\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.020871 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddfec557-645d-4fa3-9545-38a78135a452-serving-cert\") pod \"route-controller-manager-7484d9ddcc-qghj8\" (UID: \"ddfec557-645d-4fa3-9545-38a78135a452\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.021029 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kbqf\" (UniqueName: \"kubernetes.io/projected/ddfec557-645d-4fa3-9545-38a78135a452-kube-api-access-7kbqf\") pod \"route-controller-manager-7484d9ddcc-qghj8\" (UID: \"ddfec557-645d-4fa3-9545-38a78135a452\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.021167 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddfec557-645d-4fa3-9545-38a78135a452-config\") pod \"route-controller-manager-7484d9ddcc-qghj8\" (UID: \"ddfec557-645d-4fa3-9545-38a78135a452\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.021306 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/206a9b63-672c-479e-a3b3-6c59a0b0bc89-proxy-ca-bundles\") pod \"controller-manager-5b85888b7c-2c99b\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.021424 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/206a9b63-672c-479e-a3b3-6c59a0b0bc89-config\") pod \"controller-manager-5b85888b7c-2c99b\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.021574 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/206a9b63-672c-479e-a3b3-6c59a0b0bc89-serving-cert\") pod \"controller-manager-5b85888b7c-2c99b\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.122977 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ztbqg\" (UniqueName: \"kubernetes.io/projected/206a9b63-672c-479e-a3b3-6c59a0b0bc89-kube-api-access-ztbqg\") pod \"controller-manager-5b85888b7c-2c99b\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.123764 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddfec557-645d-4fa3-9545-38a78135a452-serving-cert\") pod \"route-controller-manager-7484d9ddcc-qghj8\" (UID: \"ddfec557-645d-4fa3-9545-38a78135a452\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.123869 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kbqf\" (UniqueName: \"kubernetes.io/projected/ddfec557-645d-4fa3-9545-38a78135a452-kube-api-access-7kbqf\") pod \"route-controller-manager-7484d9ddcc-qghj8\" (UID: \"ddfec557-645d-4fa3-9545-38a78135a452\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.123966 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddfec557-645d-4fa3-9545-38a78135a452-config\") pod \"route-controller-manager-7484d9ddcc-qghj8\" (UID: \"ddfec557-645d-4fa3-9545-38a78135a452\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.124059 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/206a9b63-672c-479e-a3b3-6c59a0b0bc89-proxy-ca-bundles\") pod \"controller-manager-5b85888b7c-2c99b\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.124137 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/206a9b63-672c-479e-a3b3-6c59a0b0bc89-config\") pod \"controller-manager-5b85888b7c-2c99b\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.124315 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/206a9b63-672c-479e-a3b3-6c59a0b0bc89-serving-cert\") pod \"controller-manager-5b85888b7c-2c99b\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.124416 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/206a9b63-672c-479e-a3b3-6c59a0b0bc89-client-ca\") pod \"controller-manager-5b85888b7c-2c99b\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.124520 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ddfec557-645d-4fa3-9545-38a78135a452-client-ca\") pod 
\"route-controller-manager-7484d9ddcc-qghj8\" (UID: \"ddfec557-645d-4fa3-9545-38a78135a452\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.125344 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddfec557-645d-4fa3-9545-38a78135a452-config\") pod \"route-controller-manager-7484d9ddcc-qghj8\" (UID: \"ddfec557-645d-4fa3-9545-38a78135a452\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.125459 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ddfec557-645d-4fa3-9545-38a78135a452-client-ca\") pod \"route-controller-manager-7484d9ddcc-qghj8\" (UID: \"ddfec557-645d-4fa3-9545-38a78135a452\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.126185 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/206a9b63-672c-479e-a3b3-6c59a0b0bc89-client-ca\") pod \"controller-manager-5b85888b7c-2c99b\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.126343 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/206a9b63-672c-479e-a3b3-6c59a0b0bc89-config\") pod \"controller-manager-5b85888b7c-2c99b\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.126569 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/206a9b63-672c-479e-a3b3-6c59a0b0bc89-proxy-ca-bundles\") pod \"controller-manager-5b85888b7c-2c99b\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.129989 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/206a9b63-672c-479e-a3b3-6c59a0b0bc89-serving-cert\") pod \"controller-manager-5b85888b7c-2c99b\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.138329 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddfec557-645d-4fa3-9545-38a78135a452-serving-cert\") pod \"route-controller-manager-7484d9ddcc-qghj8\" (UID: \"ddfec557-645d-4fa3-9545-38a78135a452\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.139473 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztbqg\" (UniqueName: \"kubernetes.io/projected/206a9b63-672c-479e-a3b3-6c59a0b0bc89-kube-api-access-ztbqg\") pod \"controller-manager-5b85888b7c-2c99b\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 
Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.141975 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kbqf\" (UniqueName: \"kubernetes.io/projected/ddfec557-645d-4fa3-9545-38a78135a452-kube-api-access-7kbqf\") pod \"route-controller-manager-7484d9ddcc-qghj8\" (UID: \"ddfec557-645d-4fa3-9545-38a78135a452\") " pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8"
Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.261794 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b"
Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.301569 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8"
Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.629470 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8"]
Jan 21 13:08:30 crc kubenswrapper[4765]: I0121 13:08:30.744025 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-2c99b"]
Jan 21 13:08:31 crc kubenswrapper[4765]: I0121 13:08:31.583014 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" event={"ID":"ddfec557-645d-4fa3-9545-38a78135a452","Type":"ContainerStarted","Data":"bbf6f8af01e577a5a4c56a80afc9e3326b4e4ea782a6f677d0721bd988a4e767"}
Jan 21 13:08:31 crc kubenswrapper[4765]: I0121 13:08:31.583393 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" event={"ID":"ddfec557-645d-4fa3-9545-38a78135a452","Type":"ContainerStarted","Data":"34c854afaf702abeedd73717377e44c50b7f81b362d5ec62cb8a5ed2eac15067"}
Jan 21 13:08:31 crc kubenswrapper[4765]: I0121 13:08:31.584763 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8"
Jan 21 13:08:31 crc kubenswrapper[4765]: I0121 13:08:31.586679 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" event={"ID":"206a9b63-672c-479e-a3b3-6c59a0b0bc89","Type":"ContainerStarted","Data":"c4c73ac6e9f1058ea62537fea8fdb6c3561e0fadcac2f49dfe3fef6176b007eb"}
Jan 21 13:08:31 crc kubenswrapper[4765]: I0121 13:08:31.586727 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" event={"ID":"206a9b63-672c-479e-a3b3-6c59a0b0bc89","Type":"ContainerStarted","Data":"fc8b61a810b5cbcda1db6e203e1f78fb53f6f51aca3e9da79a1ee621f0e1d753"}
Jan 21 13:08:31 crc kubenswrapper[4765]: I0121 13:08:31.586877 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b"
Jan 21 13:08:31 crc kubenswrapper[4765]: I0121 13:08:31.591196 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b"
Jan 21 13:08:31 crc kubenswrapper[4765]: I0121 13:08:31.591320 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8"
Jan 21 13:08:31 crc kubenswrapper[4765]: I0121 13:08:31.602946 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" podStartSLOduration=3.602915978 podStartE2EDuration="3.602915978s" podCreationTimestamp="2026-01-21 13:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:08:31.601618183 +0000 UTC m=+372.619344005" watchObservedRunningTime="2026-01-21 13:08:31.602915978 +0000 UTC m=+372.620641800"
Jan 21 13:08:31 crc kubenswrapper[4765]: I0121 13:08:31.621048 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c78205e-c9cf-49ab-a37d-09b03f5cdaf5" path="/var/lib/kubelet/pods/2c78205e-c9cf-49ab-a37d-09b03f5cdaf5/volumes"
Jan 21 13:08:31 crc kubenswrapper[4765]: I0121 13:08:31.652952 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" podStartSLOduration=3.652924372 podStartE2EDuration="3.652924372s" podCreationTimestamp="2026-01-21 13:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:08:31.628002968 +0000 UTC m=+372.645728810" watchObservedRunningTime="2026-01-21 13:08:31.652924372 +0000 UTC m=+372.670650194"
Jan 21 13:08:35 crc kubenswrapper[4765]: I0121 13:08:35.604531 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" podUID="1d2560a8-7f01-4b0b-b05a-443fc3be98d1" containerName="oauth-openshift" containerID="cri-o://1162d5d0f2b2a1a13f23c9a9939887f6946a8055a0eef336560bf18a8121dd48" gracePeriod=15
Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.535109 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k"
Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.575454 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-bd7987fd5-s924d"]
Jan 21 13:08:36 crc kubenswrapper[4765]: E0121 13:08:36.575746 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d2560a8-7f01-4b0b-b05a-443fc3be98d1" containerName="oauth-openshift"
Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.575761 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d2560a8-7f01-4b0b-b05a-443fc3be98d1" containerName="oauth-openshift"
Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.575854 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d2560a8-7f01-4b0b-b05a-443fc3be98d1" containerName="oauth-openshift"
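
Editor's note: pod_startup_latency_tracker reports podStartSLOduration of roughly 3.6 s for both replacement pods, measured from podCreationTimestamp (13:08:28) to observedRunningTime (13:08:31); the pull timestamps stay at the zero value because the images were already present. The trailing "m=+372.6..." on klog timestamps is the monotonic offset since kubelet start, useful for ordering entries across wall-clock adjustments. A small extraction example (the sample line is abbreviated from the tracker entry above):

import re

# Pull the SLO duration and the monotonic offset out of a tracker line.
# Sample abbreviated from the log above; field names as klog prints them.
line = ('pod="openshift-route-controller-manager/route-controller-manager-'
        '7484d9ddcc-qghj8" podStartSLOduration=3.602915978 '
        'observedRunningTime="2026-01-21 13:08:31.601618183 +0000 UTC '
        'm=+372.619344005"')

slo = float(re.search(r'podStartSLOduration=([\d.]+)', line)[1])
mono = float(re.search(r'm=\+([\d.]+)', line)[1])
print(f"startup took {slo:.3f}s; observed at +{mono:.3f}s of kubelet uptime")
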
Need to start a new one" pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.606371 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-bd7987fd5-s924d"] Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.618550 4765 generic.go:334] "Generic (PLEG): container finished" podID="1d2560a8-7f01-4b0b-b05a-443fc3be98d1" containerID="1162d5d0f2b2a1a13f23c9a9939887f6946a8055a0eef336560bf18a8121dd48" exitCode=0 Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.618616 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" event={"ID":"1d2560a8-7f01-4b0b-b05a-443fc3be98d1","Type":"ContainerDied","Data":"1162d5d0f2b2a1a13f23c9a9939887f6946a8055a0eef336560bf18a8121dd48"} Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.618659 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" event={"ID":"1d2560a8-7f01-4b0b-b05a-443fc3be98d1","Type":"ContainerDied","Data":"16dfe1d0f3605a76ddde0dce78c735a5b6dbe272949af43c6492cafb1b15a928"} Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.618683 4765 scope.go:117] "RemoveContainer" containerID="1162d5d0f2b2a1a13f23c9a9939887f6946a8055a0eef336560bf18a8121dd48" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.618683 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xfs5k" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.623563 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-template-error\") pod \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.623609 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-audit-dir\") pod \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.623630 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-cliconfig\") pod \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.623653 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-session\") pod \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.623672 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-ocp-branding-template\") pod \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.623711 4765 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-idp-0-file-data\") pod \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.623731 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-router-certs\") pod \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.623749 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-serving-cert\") pod \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.623769 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-service-ca\") pod \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.623808 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcf77\" (UniqueName: \"kubernetes.io/projected/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-kube-api-access-mcf77\") pod \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.623936 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "1d2560a8-7f01-4b0b-b05a-443fc3be98d1" (UID: "1d2560a8-7f01-4b0b-b05a-443fc3be98d1"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.624900 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-template-provider-selection\") pod \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.624947 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-trusted-ca-bundle\") pod \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.625264 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.625291 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-user-template-login\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.625309 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-router-certs\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.625326 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-service-ca\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.625359 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-session\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.625384 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: 
\"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.625405 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.625431 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh8l5\" (UniqueName: \"kubernetes.io/projected/18bc8f68-39c6-4640-98ed-c2233b61f9a7-kube-api-access-kh8l5\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.625503 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.625529 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/18bc8f68-39c6-4640-98ed-c2233b61f9a7-audit-dir\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.625733 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.625762 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/18bc8f68-39c6-4640-98ed-c2233b61f9a7-audit-policies\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.625783 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-user-template-error\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.625816 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.625851 4765 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.626325 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "1d2560a8-7f01-4b0b-b05a-443fc3be98d1" (UID: "1d2560a8-7f01-4b0b-b05a-443fc3be98d1"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.640120 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-kube-api-access-mcf77" (OuterVolumeSpecName: "kube-api-access-mcf77") pod "1d2560a8-7f01-4b0b-b05a-443fc3be98d1" (UID: "1d2560a8-7f01-4b0b-b05a-443fc3be98d1"). InnerVolumeSpecName "kube-api-access-mcf77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.640728 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "1d2560a8-7f01-4b0b-b05a-443fc3be98d1" (UID: "1d2560a8-7f01-4b0b-b05a-443fc3be98d1"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.641167 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "1d2560a8-7f01-4b0b-b05a-443fc3be98d1" (UID: "1d2560a8-7f01-4b0b-b05a-443fc3be98d1"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.641535 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "1d2560a8-7f01-4b0b-b05a-443fc3be98d1" (UID: "1d2560a8-7f01-4b0b-b05a-443fc3be98d1"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.642725 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "1d2560a8-7f01-4b0b-b05a-443fc3be98d1" (UID: "1d2560a8-7f01-4b0b-b05a-443fc3be98d1"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.643446 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "1d2560a8-7f01-4b0b-b05a-443fc3be98d1" (UID: "1d2560a8-7f01-4b0b-b05a-443fc3be98d1"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.643612 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "1d2560a8-7f01-4b0b-b05a-443fc3be98d1" (UID: "1d2560a8-7f01-4b0b-b05a-443fc3be98d1"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.649312 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "1d2560a8-7f01-4b0b-b05a-443fc3be98d1" (UID: "1d2560a8-7f01-4b0b-b05a-443fc3be98d1"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.660671 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "1d2560a8-7f01-4b0b-b05a-443fc3be98d1" (UID: "1d2560a8-7f01-4b0b-b05a-443fc3be98d1"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.664885 4765 scope.go:117] "RemoveContainer" containerID="1162d5d0f2b2a1a13f23c9a9939887f6946a8055a0eef336560bf18a8121dd48" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.665252 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "1d2560a8-7f01-4b0b-b05a-443fc3be98d1" (UID: "1d2560a8-7f01-4b0b-b05a-443fc3be98d1"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:08:36 crc kubenswrapper[4765]: E0121 13:08:36.666380 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1162d5d0f2b2a1a13f23c9a9939887f6946a8055a0eef336560bf18a8121dd48\": container with ID starting with 1162d5d0f2b2a1a13f23c9a9939887f6946a8055a0eef336560bf18a8121dd48 not found: ID does not exist" containerID="1162d5d0f2b2a1a13f23c9a9939887f6946a8055a0eef336560bf18a8121dd48" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.666452 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1162d5d0f2b2a1a13f23c9a9939887f6946a8055a0eef336560bf18a8121dd48"} err="failed to get container status \"1162d5d0f2b2a1a13f23c9a9939887f6946a8055a0eef336560bf18a8121dd48\": rpc error: code = NotFound desc = could not find container \"1162d5d0f2b2a1a13f23c9a9939887f6946a8055a0eef336560bf18a8121dd48\": container with ID starting with 1162d5d0f2b2a1a13f23c9a9939887f6946a8055a0eef336560bf18a8121dd48 not found: ID does not exist" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727197 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-template-login\") pod \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727293 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-audit-policies\") pod \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\" (UID: \"1d2560a8-7f01-4b0b-b05a-443fc3be98d1\") " Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727507 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-session\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727536 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727559 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727584 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh8l5\" (UniqueName: \"kubernetes.io/projected/18bc8f68-39c6-4640-98ed-c2233b61f9a7-kube-api-access-kh8l5\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " 
pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727625 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727652 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/18bc8f68-39c6-4640-98ed-c2233b61f9a7-audit-dir\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727680 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727701 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/18bc8f68-39c6-4640-98ed-c2233b61f9a7-audit-policies\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727724 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-user-template-error\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727748 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727768 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727789 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-user-template-login\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" 
Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727813 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-router-certs\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727832 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-service-ca\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727873 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcf77\" (UniqueName: \"kubernetes.io/projected/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-kube-api-access-mcf77\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727888 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727903 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727915 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727927 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727937 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727948 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727961 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727971 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727982 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.727994 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.729090 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-service-ca\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.729102 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "1d2560a8-7f01-4b0b-b05a-443fc3be98d1" (UID: "1d2560a8-7f01-4b0b-b05a-443fc3be98d1"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.729788 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/18bc8f68-39c6-4640-98ed-c2233b61f9a7-audit-policies\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.729854 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/18bc8f68-39c6-4640-98ed-c2233b61f9a7-audit-dir\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.730293 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.731186 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.732839 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-session\") pod 
\"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.735633 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-router-certs\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.741062 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.741838 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.745833 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-user-template-login\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.748689 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "1d2560a8-7f01-4b0b-b05a-443fc3be98d1" (UID: "1d2560a8-7f01-4b0b-b05a-443fc3be98d1"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.749418 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-user-template-error\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.749609 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.751829 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/18bc8f68-39c6-4640-98ed-c2233b61f9a7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.762783 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh8l5\" (UniqueName: \"kubernetes.io/projected/18bc8f68-39c6-4640-98ed-c2233b61f9a7-kube-api-access-kh8l5\") pod \"oauth-openshift-bd7987fd5-s924d\" (UID: \"18bc8f68-39c6-4640-98ed-c2233b61f9a7\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.830041 4765 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.830081 4765 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d2560a8-7f01-4b0b-b05a-443fc3be98d1-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.892776 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.952485 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xfs5k"] Jan 21 13:08:36 crc kubenswrapper[4765]: I0121 13:08:36.957460 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xfs5k"] Jan 21 13:08:37 crc kubenswrapper[4765]: I0121 13:08:37.383546 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-bd7987fd5-s924d"] Jan 21 13:08:37 crc kubenswrapper[4765]: I0121 13:08:37.622583 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d2560a8-7f01-4b0b-b05a-443fc3be98d1" path="/var/lib/kubelet/pods/1d2560a8-7f01-4b0b-b05a-443fc3be98d1/volumes" Jan 21 13:08:37 crc kubenswrapper[4765]: I0121 13:08:37.627094 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" event={"ID":"18bc8f68-39c6-4640-98ed-c2233b61f9a7","Type":"ContainerStarted","Data":"6ddf16c2b51c8b427784b6371cee5cb970c176ac410799a8374366c67cd0168a"} Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.258669 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-2c99b"] Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.259357 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" podUID="206a9b63-672c-479e-a3b3-6c59a0b0bc89" containerName="controller-manager" containerID="cri-o://c4c73ac6e9f1058ea62537fea8fdb6c3561e0fadcac2f49dfe3fef6176b007eb" gracePeriod=30 Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.634858 4765 generic.go:334] "Generic (PLEG): container finished" podID="206a9b63-672c-479e-a3b3-6c59a0b0bc89" containerID="c4c73ac6e9f1058ea62537fea8fdb6c3561e0fadcac2f49dfe3fef6176b007eb" exitCode=0 Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.634956 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" event={"ID":"206a9b63-672c-479e-a3b3-6c59a0b0bc89","Type":"ContainerDied","Data":"c4c73ac6e9f1058ea62537fea8fdb6c3561e0fadcac2f49dfe3fef6176b007eb"} Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.636655 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" event={"ID":"18bc8f68-39c6-4640-98ed-c2233b61f9a7","Type":"ContainerStarted","Data":"c0e8ca35cfb1e72ca0ff96a5fb9a7a68930062ab26e1a5f13cd2da6a530c269f"} Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.638253 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.644488 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.699636 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-bd7987fd5-s924d" podStartSLOduration=28.69961003 podStartE2EDuration="28.69961003s" podCreationTimestamp="2026-01-21 13:08:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-21 13:08:38.665657616 +0000 UTC m=+379.683383448" watchObservedRunningTime="2026-01-21 13:08:38.69961003 +0000 UTC m=+379.717335852" Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.796314 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.872896 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/206a9b63-672c-479e-a3b3-6c59a0b0bc89-config\") pod \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.872962 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/206a9b63-672c-479e-a3b3-6c59a0b0bc89-client-ca\") pod \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.872989 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/206a9b63-672c-479e-a3b3-6c59a0b0bc89-serving-cert\") pod \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.873029 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/206a9b63-672c-479e-a3b3-6c59a0b0bc89-proxy-ca-bundles\") pod \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.873076 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztbqg\" (UniqueName: \"kubernetes.io/projected/206a9b63-672c-479e-a3b3-6c59a0b0bc89-kube-api-access-ztbqg\") pod \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\" (UID: \"206a9b63-672c-479e-a3b3-6c59a0b0bc89\") " Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.873882 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/206a9b63-672c-479e-a3b3-6c59a0b0bc89-config" (OuterVolumeSpecName: "config") pod "206a9b63-672c-479e-a3b3-6c59a0b0bc89" (UID: "206a9b63-672c-479e-a3b3-6c59a0b0bc89"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.874406 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/206a9b63-672c-479e-a3b3-6c59a0b0bc89-client-ca" (OuterVolumeSpecName: "client-ca") pod "206a9b63-672c-479e-a3b3-6c59a0b0bc89" (UID: "206a9b63-672c-479e-a3b3-6c59a0b0bc89"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.874481 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/206a9b63-672c-479e-a3b3-6c59a0b0bc89-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "206a9b63-672c-479e-a3b3-6c59a0b0bc89" (UID: "206a9b63-672c-479e-a3b3-6c59a0b0bc89"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.878421 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/206a9b63-672c-479e-a3b3-6c59a0b0bc89-kube-api-access-ztbqg" (OuterVolumeSpecName: "kube-api-access-ztbqg") pod "206a9b63-672c-479e-a3b3-6c59a0b0bc89" (UID: "206a9b63-672c-479e-a3b3-6c59a0b0bc89"). InnerVolumeSpecName "kube-api-access-ztbqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.884898 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/206a9b63-672c-479e-a3b3-6c59a0b0bc89-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "206a9b63-672c-479e-a3b3-6c59a0b0bc89" (UID: "206a9b63-672c-479e-a3b3-6c59a0b0bc89"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.973939 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/206a9b63-672c-479e-a3b3-6c59a0b0bc89-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.973975 4765 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/206a9b63-672c-479e-a3b3-6c59a0b0bc89-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.973986 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/206a9b63-672c-479e-a3b3-6c59a0b0bc89-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.973995 4765 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/206a9b63-672c-479e-a3b3-6c59a0b0bc89-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:38 crc kubenswrapper[4765]: I0121 13:08:38.974008 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztbqg\" (UniqueName: \"kubernetes.io/projected/206a9b63-672c-479e-a3b3-6c59a0b0bc89-kube-api-access-ztbqg\") on node \"crc\" DevicePath \"\"" Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.644031 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" event={"ID":"206a9b63-672c-479e-a3b3-6c59a0b0bc89","Type":"ContainerDied","Data":"fc8b61a810b5cbcda1db6e203e1f78fb53f6f51aca3e9da79a1ee621f0e1d753"} Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.644070 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85888b7c-2c99b" Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.644114 4765 scope.go:117] "RemoveContainer" containerID="c4c73ac6e9f1058ea62537fea8fdb6c3561e0fadcac2f49dfe3fef6176b007eb" Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.682138 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-2c99b"] Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.685709 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b85888b7c-2c99b"] Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.934069 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5"] Jan 21 13:08:39 crc kubenswrapper[4765]: E0121 13:08:39.934402 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206a9b63-672c-479e-a3b3-6c59a0b0bc89" containerName="controller-manager" Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.934421 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="206a9b63-672c-479e-a3b3-6c59a0b0bc89" containerName="controller-manager" Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.934523 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="206a9b63-672c-479e-a3b3-6c59a0b0bc89" containerName="controller-manager" Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.935026 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.941254 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.943445 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.945548 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.945683 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.946110 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.951937 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.955009 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.966545 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5"] Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.987363 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee42451a-cffb-4bb3-a815-41bb7baaab3e-config\") pod \"controller-manager-6cc5c7dcff-7nvk5\" (UID: \"ee42451a-cffb-4bb3-a815-41bb7baaab3e\") " 
pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.987415 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt4xs\" (UniqueName: \"kubernetes.io/projected/ee42451a-cffb-4bb3-a815-41bb7baaab3e-kube-api-access-qt4xs\") pod \"controller-manager-6cc5c7dcff-7nvk5\" (UID: \"ee42451a-cffb-4bb3-a815-41bb7baaab3e\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.987450 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee42451a-cffb-4bb3-a815-41bb7baaab3e-proxy-ca-bundles\") pod \"controller-manager-6cc5c7dcff-7nvk5\" (UID: \"ee42451a-cffb-4bb3-a815-41bb7baaab3e\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.987483 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee42451a-cffb-4bb3-a815-41bb7baaab3e-serving-cert\") pod \"controller-manager-6cc5c7dcff-7nvk5\" (UID: \"ee42451a-cffb-4bb3-a815-41bb7baaab3e\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" Jan 21 13:08:39 crc kubenswrapper[4765]: I0121 13:08:39.987520 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee42451a-cffb-4bb3-a815-41bb7baaab3e-client-ca\") pod \"controller-manager-6cc5c7dcff-7nvk5\" (UID: \"ee42451a-cffb-4bb3-a815-41bb7baaab3e\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" Jan 21 13:08:40 crc kubenswrapper[4765]: I0121 13:08:40.088684 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee42451a-cffb-4bb3-a815-41bb7baaab3e-config\") pod \"controller-manager-6cc5c7dcff-7nvk5\" (UID: \"ee42451a-cffb-4bb3-a815-41bb7baaab3e\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" Jan 21 13:08:40 crc kubenswrapper[4765]: I0121 13:08:40.089061 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt4xs\" (UniqueName: \"kubernetes.io/projected/ee42451a-cffb-4bb3-a815-41bb7baaab3e-kube-api-access-qt4xs\") pod \"controller-manager-6cc5c7dcff-7nvk5\" (UID: \"ee42451a-cffb-4bb3-a815-41bb7baaab3e\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" Jan 21 13:08:40 crc kubenswrapper[4765]: I0121 13:08:40.089150 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee42451a-cffb-4bb3-a815-41bb7baaab3e-proxy-ca-bundles\") pod \"controller-manager-6cc5c7dcff-7nvk5\" (UID: \"ee42451a-cffb-4bb3-a815-41bb7baaab3e\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" Jan 21 13:08:40 crc kubenswrapper[4765]: I0121 13:08:40.089261 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee42451a-cffb-4bb3-a815-41bb7baaab3e-serving-cert\") pod \"controller-manager-6cc5c7dcff-7nvk5\" (UID: \"ee42451a-cffb-4bb3-a815-41bb7baaab3e\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" Jan 21 13:08:40 crc 
kubenswrapper[4765]: I0121 13:08:40.089371 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee42451a-cffb-4bb3-a815-41bb7baaab3e-client-ca\") pod \"controller-manager-6cc5c7dcff-7nvk5\" (UID: \"ee42451a-cffb-4bb3-a815-41bb7baaab3e\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" Jan 21 13:08:40 crc kubenswrapper[4765]: I0121 13:08:40.090362 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee42451a-cffb-4bb3-a815-41bb7baaab3e-client-ca\") pod \"controller-manager-6cc5c7dcff-7nvk5\" (UID: \"ee42451a-cffb-4bb3-a815-41bb7baaab3e\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" Jan 21 13:08:40 crc kubenswrapper[4765]: I0121 13:08:40.090422 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee42451a-cffb-4bb3-a815-41bb7baaab3e-config\") pod \"controller-manager-6cc5c7dcff-7nvk5\" (UID: \"ee42451a-cffb-4bb3-a815-41bb7baaab3e\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" Jan 21 13:08:40 crc kubenswrapper[4765]: I0121 13:08:40.091525 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee42451a-cffb-4bb3-a815-41bb7baaab3e-proxy-ca-bundles\") pod \"controller-manager-6cc5c7dcff-7nvk5\" (UID: \"ee42451a-cffb-4bb3-a815-41bb7baaab3e\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" Jan 21 13:08:40 crc kubenswrapper[4765]: I0121 13:08:40.107327 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee42451a-cffb-4bb3-a815-41bb7baaab3e-serving-cert\") pod \"controller-manager-6cc5c7dcff-7nvk5\" (UID: \"ee42451a-cffb-4bb3-a815-41bb7baaab3e\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" Jan 21 13:08:40 crc kubenswrapper[4765]: I0121 13:08:40.110559 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt4xs\" (UniqueName: \"kubernetes.io/projected/ee42451a-cffb-4bb3-a815-41bb7baaab3e-kube-api-access-qt4xs\") pod \"controller-manager-6cc5c7dcff-7nvk5\" (UID: \"ee42451a-cffb-4bb3-a815-41bb7baaab3e\") " pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" Jan 21 13:08:40 crc kubenswrapper[4765]: I0121 13:08:40.249735 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" Jan 21 13:08:40 crc kubenswrapper[4765]: I0121 13:08:40.721688 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5"] Jan 21 13:08:40 crc kubenswrapper[4765]: W0121 13:08:40.728355 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee42451a_cffb_4bb3_a815_41bb7baaab3e.slice/crio-02b1d4858d819f3f19831cf8b963af43f36661c839283533df2fa937941fe164 WatchSource:0}: Error finding container 02b1d4858d819f3f19831cf8b963af43f36661c839283533df2fa937941fe164: Status 404 returned error can't find the container with id 02b1d4858d819f3f19831cf8b963af43f36661c839283533df2fa937941fe164 Jan 21 13:08:41 crc kubenswrapper[4765]: I0121 13:08:41.623523 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="206a9b63-672c-479e-a3b3-6c59a0b0bc89" path="/var/lib/kubelet/pods/206a9b63-672c-479e-a3b3-6c59a0b0bc89/volumes" Jan 21 13:08:41 crc kubenswrapper[4765]: I0121 13:08:41.669069 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" event={"ID":"ee42451a-cffb-4bb3-a815-41bb7baaab3e","Type":"ContainerStarted","Data":"68bbd34544bbfe649c06a57d6bda968f96613d82ededeae2b0fa525e66753aa2"} Jan 21 13:08:41 crc kubenswrapper[4765]: I0121 13:08:41.669125 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" event={"ID":"ee42451a-cffb-4bb3-a815-41bb7baaab3e","Type":"ContainerStarted","Data":"02b1d4858d819f3f19831cf8b963af43f36661c839283533df2fa937941fe164"} Jan 21 13:08:41 crc kubenswrapper[4765]: I0121 13:08:41.669734 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" Jan 21 13:08:41 crc kubenswrapper[4765]: I0121 13:08:41.675472 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" Jan 21 13:08:41 crc kubenswrapper[4765]: I0121 13:08:41.691605 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cc5c7dcff-7nvk5" podStartSLOduration=3.691574435 podStartE2EDuration="3.691574435s" podCreationTimestamp="2026-01-21 13:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:08:41.6873427 +0000 UTC m=+382.705068522" watchObservedRunningTime="2026-01-21 13:08:41.691574435 +0000 UTC m=+382.709300257" Jan 21 13:08:44 crc kubenswrapper[4765]: I0121 13:08:44.446505 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:08:44 crc kubenswrapper[4765]: I0121 13:08:44.446605 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:08:45 crc kubenswrapper[4765]: I0121 
13:08:45.565268 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-54n7h"] Jan 21 13:08:45 crc kubenswrapper[4765]: I0121 13:08:45.566772 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-54n7h" Jan 21 13:08:45 crc kubenswrapper[4765]: I0121 13:08:45.569117 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 13:08:45 crc kubenswrapper[4765]: I0121 13:08:45.581153 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-54n7h"] Jan 21 13:08:45 crc kubenswrapper[4765]: I0121 13:08:45.672683 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/807f8e51-3f5b-4702-be3f-7fe335b54522-catalog-content\") pod \"redhat-operators-54n7h\" (UID: \"807f8e51-3f5b-4702-be3f-7fe335b54522\") " pod="openshift-marketplace/redhat-operators-54n7h" Jan 21 13:08:45 crc kubenswrapper[4765]: I0121 13:08:45.672763 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwhl7\" (UniqueName: \"kubernetes.io/projected/807f8e51-3f5b-4702-be3f-7fe335b54522-kube-api-access-wwhl7\") pod \"redhat-operators-54n7h\" (UID: \"807f8e51-3f5b-4702-be3f-7fe335b54522\") " pod="openshift-marketplace/redhat-operators-54n7h" Jan 21 13:08:45 crc kubenswrapper[4765]: I0121 13:08:45.672820 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/807f8e51-3f5b-4702-be3f-7fe335b54522-utilities\") pod \"redhat-operators-54n7h\" (UID: \"807f8e51-3f5b-4702-be3f-7fe335b54522\") " pod="openshift-marketplace/redhat-operators-54n7h" Jan 21 13:08:45 crc kubenswrapper[4765]: I0121 13:08:45.774690 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/807f8e51-3f5b-4702-be3f-7fe335b54522-catalog-content\") pod \"redhat-operators-54n7h\" (UID: \"807f8e51-3f5b-4702-be3f-7fe335b54522\") " pod="openshift-marketplace/redhat-operators-54n7h" Jan 21 13:08:45 crc kubenswrapper[4765]: I0121 13:08:45.774780 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwhl7\" (UniqueName: \"kubernetes.io/projected/807f8e51-3f5b-4702-be3f-7fe335b54522-kube-api-access-wwhl7\") pod \"redhat-operators-54n7h\" (UID: \"807f8e51-3f5b-4702-be3f-7fe335b54522\") " pod="openshift-marketplace/redhat-operators-54n7h" Jan 21 13:08:45 crc kubenswrapper[4765]: I0121 13:08:45.774825 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/807f8e51-3f5b-4702-be3f-7fe335b54522-utilities\") pod \"redhat-operators-54n7h\" (UID: \"807f8e51-3f5b-4702-be3f-7fe335b54522\") " pod="openshift-marketplace/redhat-operators-54n7h" Jan 21 13:08:45 crc kubenswrapper[4765]: I0121 13:08:45.775423 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/807f8e51-3f5b-4702-be3f-7fe335b54522-utilities\") pod \"redhat-operators-54n7h\" (UID: \"807f8e51-3f5b-4702-be3f-7fe335b54522\") " pod="openshift-marketplace/redhat-operators-54n7h" Jan 21 13:08:45 crc kubenswrapper[4765]: I0121 13:08:45.775426 4765 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/807f8e51-3f5b-4702-be3f-7fe335b54522-catalog-content\") pod \"redhat-operators-54n7h\" (UID: \"807f8e51-3f5b-4702-be3f-7fe335b54522\") " pod="openshift-marketplace/redhat-operators-54n7h" Jan 21 13:08:45 crc kubenswrapper[4765]: I0121 13:08:45.794846 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwhl7\" (UniqueName: \"kubernetes.io/projected/807f8e51-3f5b-4702-be3f-7fe335b54522-kube-api-access-wwhl7\") pod \"redhat-operators-54n7h\" (UID: \"807f8e51-3f5b-4702-be3f-7fe335b54522\") " pod="openshift-marketplace/redhat-operators-54n7h" Jan 21 13:08:45 crc kubenswrapper[4765]: I0121 13:08:45.891012 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-54n7h" Jan 21 13:08:46 crc kubenswrapper[4765]: I0121 13:08:46.348105 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-54n7h"] Jan 21 13:08:46 crc kubenswrapper[4765]: I0121 13:08:46.699257 4765 generic.go:334] "Generic (PLEG): container finished" podID="807f8e51-3f5b-4702-be3f-7fe335b54522" containerID="9f254f3cdb59002510445467c6d97e840d5ddada51cc2247dd08f0eee153fc34" exitCode=0 Jan 21 13:08:46 crc kubenswrapper[4765]: I0121 13:08:46.699315 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54n7h" event={"ID":"807f8e51-3f5b-4702-be3f-7fe335b54522","Type":"ContainerDied","Data":"9f254f3cdb59002510445467c6d97e840d5ddada51cc2247dd08f0eee153fc34"} Jan 21 13:08:46 crc kubenswrapper[4765]: I0121 13:08:46.699347 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54n7h" event={"ID":"807f8e51-3f5b-4702-be3f-7fe335b54522","Type":"ContainerStarted","Data":"8b489a7d903a6c3bc2e66d9252903c326d7e4ddbabc84da1d13c6ced19981bb0"} Jan 21 13:08:46 crc kubenswrapper[4765]: I0121 13:08:46.963841 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n2lvd"] Jan 21 13:08:46 crc kubenswrapper[4765]: I0121 13:08:46.965443 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n2lvd" Jan 21 13:08:46 crc kubenswrapper[4765]: I0121 13:08:46.968763 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 13:08:46 crc kubenswrapper[4765]: I0121 13:08:46.977542 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n2lvd"] Jan 21 13:08:47 crc kubenswrapper[4765]: I0121 13:08:47.096438 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd0d39b7-d9c4-4e89-a696-163f5f23eb76-utilities\") pod \"redhat-marketplace-n2lvd\" (UID: \"fd0d39b7-d9c4-4e89-a696-163f5f23eb76\") " pod="openshift-marketplace/redhat-marketplace-n2lvd" Jan 21 13:08:47 crc kubenswrapper[4765]: I0121 13:08:47.096571 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztj2v\" (UniqueName: \"kubernetes.io/projected/fd0d39b7-d9c4-4e89-a696-163f5f23eb76-kube-api-access-ztj2v\") pod \"redhat-marketplace-n2lvd\" (UID: \"fd0d39b7-d9c4-4e89-a696-163f5f23eb76\") " pod="openshift-marketplace/redhat-marketplace-n2lvd" Jan 21 13:08:47 crc kubenswrapper[4765]: I0121 13:08:47.096651 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd0d39b7-d9c4-4e89-a696-163f5f23eb76-catalog-content\") pod \"redhat-marketplace-n2lvd\" (UID: \"fd0d39b7-d9c4-4e89-a696-163f5f23eb76\") " pod="openshift-marketplace/redhat-marketplace-n2lvd" Jan 21 13:08:47 crc kubenswrapper[4765]: I0121 13:08:47.198071 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztj2v\" (UniqueName: \"kubernetes.io/projected/fd0d39b7-d9c4-4e89-a696-163f5f23eb76-kube-api-access-ztj2v\") pod \"redhat-marketplace-n2lvd\" (UID: \"fd0d39b7-d9c4-4e89-a696-163f5f23eb76\") " pod="openshift-marketplace/redhat-marketplace-n2lvd" Jan 21 13:08:47 crc kubenswrapper[4765]: I0121 13:08:47.198198 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd0d39b7-d9c4-4e89-a696-163f5f23eb76-catalog-content\") pod \"redhat-marketplace-n2lvd\" (UID: \"fd0d39b7-d9c4-4e89-a696-163f5f23eb76\") " pod="openshift-marketplace/redhat-marketplace-n2lvd" Jan 21 13:08:47 crc kubenswrapper[4765]: I0121 13:08:47.198326 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd0d39b7-d9c4-4e89-a696-163f5f23eb76-utilities\") pod \"redhat-marketplace-n2lvd\" (UID: \"fd0d39b7-d9c4-4e89-a696-163f5f23eb76\") " pod="openshift-marketplace/redhat-marketplace-n2lvd" Jan 21 13:08:47 crc kubenswrapper[4765]: I0121 13:08:47.199296 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd0d39b7-d9c4-4e89-a696-163f5f23eb76-utilities\") pod \"redhat-marketplace-n2lvd\" (UID: \"fd0d39b7-d9c4-4e89-a696-163f5f23eb76\") " pod="openshift-marketplace/redhat-marketplace-n2lvd" Jan 21 13:08:47 crc kubenswrapper[4765]: I0121 13:08:47.199782 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd0d39b7-d9c4-4e89-a696-163f5f23eb76-catalog-content\") pod \"redhat-marketplace-n2lvd\" (UID: 
\"fd0d39b7-d9c4-4e89-a696-163f5f23eb76\") " pod="openshift-marketplace/redhat-marketplace-n2lvd" Jan 21 13:08:47 crc kubenswrapper[4765]: I0121 13:08:47.221545 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztj2v\" (UniqueName: \"kubernetes.io/projected/fd0d39b7-d9c4-4e89-a696-163f5f23eb76-kube-api-access-ztj2v\") pod \"redhat-marketplace-n2lvd\" (UID: \"fd0d39b7-d9c4-4e89-a696-163f5f23eb76\") " pod="openshift-marketplace/redhat-marketplace-n2lvd" Jan 21 13:08:47 crc kubenswrapper[4765]: I0121 13:08:47.282789 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n2lvd" Jan 21 13:08:47 crc kubenswrapper[4765]: I0121 13:08:47.787015 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n2lvd"] Jan 21 13:08:47 crc kubenswrapper[4765]: I0121 13:08:47.982022 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fskff"] Jan 21 13:08:47 crc kubenswrapper[4765]: I0121 13:08:47.983319 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fskff" Jan 21 13:08:47 crc kubenswrapper[4765]: I0121 13:08:47.988101 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 13:08:48 crc kubenswrapper[4765]: I0121 13:08:48.015408 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fskff"] Jan 21 13:08:48 crc kubenswrapper[4765]: I0121 13:08:48.111716 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfc4c\" (UniqueName: \"kubernetes.io/projected/f231dd53-72c3-4d70-879f-d840f959c6c6-kube-api-access-gfc4c\") pod \"community-operators-fskff\" (UID: \"f231dd53-72c3-4d70-879f-d840f959c6c6\") " pod="openshift-marketplace/community-operators-fskff" Jan 21 13:08:48 crc kubenswrapper[4765]: I0121 13:08:48.111821 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f231dd53-72c3-4d70-879f-d840f959c6c6-catalog-content\") pod \"community-operators-fskff\" (UID: \"f231dd53-72c3-4d70-879f-d840f959c6c6\") " pod="openshift-marketplace/community-operators-fskff" Jan 21 13:08:48 crc kubenswrapper[4765]: I0121 13:08:48.111874 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f231dd53-72c3-4d70-879f-d840f959c6c6-utilities\") pod \"community-operators-fskff\" (UID: \"f231dd53-72c3-4d70-879f-d840f959c6c6\") " pod="openshift-marketplace/community-operators-fskff" Jan 21 13:08:48 crc kubenswrapper[4765]: I0121 13:08:48.213359 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfc4c\" (UniqueName: \"kubernetes.io/projected/f231dd53-72c3-4d70-879f-d840f959c6c6-kube-api-access-gfc4c\") pod \"community-operators-fskff\" (UID: \"f231dd53-72c3-4d70-879f-d840f959c6c6\") " pod="openshift-marketplace/community-operators-fskff" Jan 21 13:08:48 crc kubenswrapper[4765]: I0121 13:08:48.213442 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f231dd53-72c3-4d70-879f-d840f959c6c6-catalog-content\") pod \"community-operators-fskff\" (UID: 
\"f231dd53-72c3-4d70-879f-d840f959c6c6\") " pod="openshift-marketplace/community-operators-fskff" Jan 21 13:08:48 crc kubenswrapper[4765]: I0121 13:08:48.213479 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f231dd53-72c3-4d70-879f-d840f959c6c6-utilities\") pod \"community-operators-fskff\" (UID: \"f231dd53-72c3-4d70-879f-d840f959c6c6\") " pod="openshift-marketplace/community-operators-fskff" Jan 21 13:08:48 crc kubenswrapper[4765]: I0121 13:08:48.214080 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f231dd53-72c3-4d70-879f-d840f959c6c6-utilities\") pod \"community-operators-fskff\" (UID: \"f231dd53-72c3-4d70-879f-d840f959c6c6\") " pod="openshift-marketplace/community-operators-fskff" Jan 21 13:08:48 crc kubenswrapper[4765]: I0121 13:08:48.214101 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f231dd53-72c3-4d70-879f-d840f959c6c6-catalog-content\") pod \"community-operators-fskff\" (UID: \"f231dd53-72c3-4d70-879f-d840f959c6c6\") " pod="openshift-marketplace/community-operators-fskff" Jan 21 13:08:48 crc kubenswrapper[4765]: I0121 13:08:48.235770 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfc4c\" (UniqueName: \"kubernetes.io/projected/f231dd53-72c3-4d70-879f-d840f959c6c6-kube-api-access-gfc4c\") pod \"community-operators-fskff\" (UID: \"f231dd53-72c3-4d70-879f-d840f959c6c6\") " pod="openshift-marketplace/community-operators-fskff" Jan 21 13:08:48 crc kubenswrapper[4765]: I0121 13:08:48.301145 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fskff" Jan 21 13:08:48 crc kubenswrapper[4765]: I0121 13:08:48.712846 4765 generic.go:334] "Generic (PLEG): container finished" podID="807f8e51-3f5b-4702-be3f-7fe335b54522" containerID="03e6bc669d04d26d69a9127240b846691d63790849e8951fe7ec08881254380c" exitCode=0 Jan 21 13:08:48 crc kubenswrapper[4765]: I0121 13:08:48.713301 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54n7h" event={"ID":"807f8e51-3f5b-4702-be3f-7fe335b54522","Type":"ContainerDied","Data":"03e6bc669d04d26d69a9127240b846691d63790849e8951fe7ec08881254380c"} Jan 21 13:08:48 crc kubenswrapper[4765]: I0121 13:08:48.715831 4765 generic.go:334] "Generic (PLEG): container finished" podID="fd0d39b7-d9c4-4e89-a696-163f5f23eb76" containerID="fc38c94e1b668bbdba85dc8ed0b415d6de16f8e3cb2036ace37b61b9cc2fc814" exitCode=0 Jan 21 13:08:48 crc kubenswrapper[4765]: I0121 13:08:48.715905 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2lvd" event={"ID":"fd0d39b7-d9c4-4e89-a696-163f5f23eb76","Type":"ContainerDied","Data":"fc38c94e1b668bbdba85dc8ed0b415d6de16f8e3cb2036ace37b61b9cc2fc814"} Jan 21 13:08:48 crc kubenswrapper[4765]: I0121 13:08:48.715963 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2lvd" event={"ID":"fd0d39b7-d9c4-4e89-a696-163f5f23eb76","Type":"ContainerStarted","Data":"d0a2ab85f5698cb6b490910917561e3ac796f54551173cd263d3cef21b3dc55b"} Jan 21 13:08:48 crc kubenswrapper[4765]: I0121 13:08:48.747650 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fskff"] Jan 21 13:08:48 crc kubenswrapper[4765]: W0121 13:08:48.759960 
4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf231dd53_72c3_4d70_879f_d840f959c6c6.slice/crio-a71c909fd6760200ca4d22a2782ca8204123bde082f5ae10d24f085380a5633f WatchSource:0}: Error finding container a71c909fd6760200ca4d22a2782ca8204123bde082f5ae10d24f085380a5633f: Status 404 returned error can't find the container with id a71c909fd6760200ca4d22a2782ca8204123bde082f5ae10d24f085380a5633f Jan 21 13:08:49 crc kubenswrapper[4765]: I0121 13:08:49.374554 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l5vxr"] Jan 21 13:08:49 crc kubenswrapper[4765]: I0121 13:08:49.375726 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l5vxr" Jan 21 13:08:49 crc kubenswrapper[4765]: I0121 13:08:49.377847 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 13:08:49 crc kubenswrapper[4765]: I0121 13:08:49.413827 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l5vxr"] Jan 21 13:08:49 crc kubenswrapper[4765]: I0121 13:08:49.543181 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1290053f-ebc1-4a58-963a-333751e51945-catalog-content\") pod \"certified-operators-l5vxr\" (UID: \"1290053f-ebc1-4a58-963a-333751e51945\") " pod="openshift-marketplace/certified-operators-l5vxr" Jan 21 13:08:49 crc kubenswrapper[4765]: I0121 13:08:49.543867 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1290053f-ebc1-4a58-963a-333751e51945-utilities\") pod \"certified-operators-l5vxr\" (UID: \"1290053f-ebc1-4a58-963a-333751e51945\") " pod="openshift-marketplace/certified-operators-l5vxr" Jan 21 13:08:49 crc kubenswrapper[4765]: I0121 13:08:49.543912 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqw45\" (UniqueName: \"kubernetes.io/projected/1290053f-ebc1-4a58-963a-333751e51945-kube-api-access-gqw45\") pod \"certified-operators-l5vxr\" (UID: \"1290053f-ebc1-4a58-963a-333751e51945\") " pod="openshift-marketplace/certified-operators-l5vxr" Jan 21 13:08:49 crc kubenswrapper[4765]: I0121 13:08:49.645498 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1290053f-ebc1-4a58-963a-333751e51945-catalog-content\") pod \"certified-operators-l5vxr\" (UID: \"1290053f-ebc1-4a58-963a-333751e51945\") " pod="openshift-marketplace/certified-operators-l5vxr" Jan 21 13:08:49 crc kubenswrapper[4765]: I0121 13:08:49.645597 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1290053f-ebc1-4a58-963a-333751e51945-utilities\") pod \"certified-operators-l5vxr\" (UID: \"1290053f-ebc1-4a58-963a-333751e51945\") " pod="openshift-marketplace/certified-operators-l5vxr" Jan 21 13:08:49 crc kubenswrapper[4765]: I0121 13:08:49.645624 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqw45\" (UniqueName: \"kubernetes.io/projected/1290053f-ebc1-4a58-963a-333751e51945-kube-api-access-gqw45\") pod \"certified-operators-l5vxr\" (UID: 
\"1290053f-ebc1-4a58-963a-333751e51945\") " pod="openshift-marketplace/certified-operators-l5vxr" Jan 21 13:08:49 crc kubenswrapper[4765]: I0121 13:08:49.646928 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1290053f-ebc1-4a58-963a-333751e51945-catalog-content\") pod \"certified-operators-l5vxr\" (UID: \"1290053f-ebc1-4a58-963a-333751e51945\") " pod="openshift-marketplace/certified-operators-l5vxr" Jan 21 13:08:49 crc kubenswrapper[4765]: I0121 13:08:49.647550 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1290053f-ebc1-4a58-963a-333751e51945-utilities\") pod \"certified-operators-l5vxr\" (UID: \"1290053f-ebc1-4a58-963a-333751e51945\") " pod="openshift-marketplace/certified-operators-l5vxr" Jan 21 13:08:49 crc kubenswrapper[4765]: I0121 13:08:49.676445 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqw45\" (UniqueName: \"kubernetes.io/projected/1290053f-ebc1-4a58-963a-333751e51945-kube-api-access-gqw45\") pod \"certified-operators-l5vxr\" (UID: \"1290053f-ebc1-4a58-963a-333751e51945\") " pod="openshift-marketplace/certified-operators-l5vxr" Jan 21 13:08:49 crc kubenswrapper[4765]: I0121 13:08:49.725923 4765 generic.go:334] "Generic (PLEG): container finished" podID="f231dd53-72c3-4d70-879f-d840f959c6c6" containerID="ba9059b6c0827a63ce4a585619ca02d1a7b5878fbb7d5ce30076e48c52d4c092" exitCode=0 Jan 21 13:08:49 crc kubenswrapper[4765]: I0121 13:08:49.726016 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fskff" event={"ID":"f231dd53-72c3-4d70-879f-d840f959c6c6","Type":"ContainerDied","Data":"ba9059b6c0827a63ce4a585619ca02d1a7b5878fbb7d5ce30076e48c52d4c092"} Jan 21 13:08:49 crc kubenswrapper[4765]: I0121 13:08:49.726054 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fskff" event={"ID":"f231dd53-72c3-4d70-879f-d840f959c6c6","Type":"ContainerStarted","Data":"a71c909fd6760200ca4d22a2782ca8204123bde082f5ae10d24f085380a5633f"} Jan 21 13:08:49 crc kubenswrapper[4765]: I0121 13:08:49.727592 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l5vxr" Jan 21 13:08:49 crc kubenswrapper[4765]: I0121 13:08:49.734991 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54n7h" event={"ID":"807f8e51-3f5b-4702-be3f-7fe335b54522","Type":"ContainerStarted","Data":"fe907e20acdcbb11b9c08dd559a75f1dec33e2cee6c5a50eeb630f644e96ada9"} Jan 21 13:08:49 crc kubenswrapper[4765]: I0121 13:08:49.815160 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-54n7h" podStartSLOduration=2.224342525 podStartE2EDuration="4.815097433s" podCreationTimestamp="2026-01-21 13:08:45 +0000 UTC" firstStartedPulling="2026-01-21 13:08:46.701569951 +0000 UTC m=+387.719295773" lastFinishedPulling="2026-01-21 13:08:49.292324869 +0000 UTC m=+390.310050681" observedRunningTime="2026-01-21 13:08:49.806968795 +0000 UTC m=+390.824694647" watchObservedRunningTime="2026-01-21 13:08:49.815097433 +0000 UTC m=+390.832823265" Jan 21 13:08:50 crc kubenswrapper[4765]: I0121 13:08:50.248585 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l5vxr"] Jan 21 13:08:50 crc kubenswrapper[4765]: I0121 13:08:50.742583 4765 generic.go:334] "Generic (PLEG): container finished" podID="1290053f-ebc1-4a58-963a-333751e51945" containerID="8d454fcc0bb53d409ee990665e9c618589d6a45a79537f328c1eb0d54f94aefe" exitCode=0 Jan 21 13:08:50 crc kubenswrapper[4765]: I0121 13:08:50.742719 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5vxr" event={"ID":"1290053f-ebc1-4a58-963a-333751e51945","Type":"ContainerDied","Data":"8d454fcc0bb53d409ee990665e9c618589d6a45a79537f328c1eb0d54f94aefe"} Jan 21 13:08:50 crc kubenswrapper[4765]: I0121 13:08:50.743272 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5vxr" event={"ID":"1290053f-ebc1-4a58-963a-333751e51945","Type":"ContainerStarted","Data":"31322be1c4a9ce909cd598997100d636d1f3e674b498f2fdcd85a3055a2dc3cf"} Jan 21 13:08:50 crc kubenswrapper[4765]: I0121 13:08:50.745665 4765 generic.go:334] "Generic (PLEG): container finished" podID="fd0d39b7-d9c4-4e89-a696-163f5f23eb76" containerID="0d798a1d46a9cb3dd28872d82dba6c9a2c7371ce9dbf9cd7e59f276a3391561e" exitCode=0 Jan 21 13:08:50 crc kubenswrapper[4765]: I0121 13:08:50.745721 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2lvd" event={"ID":"fd0d39b7-d9c4-4e89-a696-163f5f23eb76","Type":"ContainerDied","Data":"0d798a1d46a9cb3dd28872d82dba6c9a2c7371ce9dbf9cd7e59f276a3391561e"} Jan 21 13:08:50 crc kubenswrapper[4765]: I0121 13:08:50.749352 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fskff" event={"ID":"f231dd53-72c3-4d70-879f-d840f959c6c6","Type":"ContainerStarted","Data":"59a1bc4c7860d55371fbf3dbf27fe085c06e953614e459f7a9c27940dc990c64"} Jan 21 13:08:50 crc kubenswrapper[4765]: I0121 13:08:50.916434 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2cwx9"] Jan 21 13:08:50 crc kubenswrapper[4765]: I0121 13:08:50.917818 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:50 crc kubenswrapper[4765]: I0121 13:08:50.960373 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2cwx9"] Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.066892 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fa362580-ffe9-40d9-a338-ed618e1011fb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.067164 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fa362580-ffe9-40d9-a338-ed618e1011fb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.067349 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fa362580-ffe9-40d9-a338-ed618e1011fb-registry-tls\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.067446 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s96s2\" (UniqueName: \"kubernetes.io/projected/fa362580-ffe9-40d9-a338-ed618e1011fb-kube-api-access-s96s2\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.067555 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa362580-ffe9-40d9-a338-ed618e1011fb-bound-sa-token\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.067778 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.067852 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fa362580-ffe9-40d9-a338-ed618e1011fb-registry-certificates\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.067912 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/fa362580-ffe9-40d9-a338-ed618e1011fb-trusted-ca\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.088140 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.170001 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa362580-ffe9-40d9-a338-ed618e1011fb-bound-sa-token\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.170078 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fa362580-ffe9-40d9-a338-ed618e1011fb-registry-certificates\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.170116 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa362580-ffe9-40d9-a338-ed618e1011fb-trusted-ca\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.170172 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fa362580-ffe9-40d9-a338-ed618e1011fb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.170226 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fa362580-ffe9-40d9-a338-ed618e1011fb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.170269 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fa362580-ffe9-40d9-a338-ed618e1011fb-registry-tls\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.170292 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s96s2\" (UniqueName: \"kubernetes.io/projected/fa362580-ffe9-40d9-a338-ed618e1011fb-kube-api-access-s96s2\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.171045 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fa362580-ffe9-40d9-a338-ed618e1011fb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.171887 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fa362580-ffe9-40d9-a338-ed618e1011fb-registry-certificates\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.173096 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa362580-ffe9-40d9-a338-ed618e1011fb-trusted-ca\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.176944 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fa362580-ffe9-40d9-a338-ed618e1011fb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.177865 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fa362580-ffe9-40d9-a338-ed618e1011fb-registry-tls\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.192305 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s96s2\" (UniqueName: \"kubernetes.io/projected/fa362580-ffe9-40d9-a338-ed618e1011fb-kube-api-access-s96s2\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.197670 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa362580-ffe9-40d9-a338-ed618e1011fb-bound-sa-token\") pod \"image-registry-66df7c8f76-2cwx9\" (UID: \"fa362580-ffe9-40d9-a338-ed618e1011fb\") " pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.234855 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.683141 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2cwx9"] Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.756538 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" event={"ID":"fa362580-ffe9-40d9-a338-ed618e1011fb","Type":"ContainerStarted","Data":"514dbe753b1327ef733cd7d36de081cd34734c258655f4e09c0c1e644c16addc"} Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.758526 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5vxr" event={"ID":"1290053f-ebc1-4a58-963a-333751e51945","Type":"ContainerStarted","Data":"7db80324b75062e25a9fe3f96ef751eb2076daa89e00e2c8be6f6a0510f4f147"} Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.763585 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n2lvd" event={"ID":"fd0d39b7-d9c4-4e89-a696-163f5f23eb76","Type":"ContainerStarted","Data":"8a848e7930ee1d5f57e8f51468fd2c7eda2b4e1192dfc46bb2f80314f554fb99"} Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.773196 4765 generic.go:334] "Generic (PLEG): container finished" podID="f231dd53-72c3-4d70-879f-d840f959c6c6" containerID="59a1bc4c7860d55371fbf3dbf27fe085c06e953614e459f7a9c27940dc990c64" exitCode=0 Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.774381 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fskff" event={"ID":"f231dd53-72c3-4d70-879f-d840f959c6c6","Type":"ContainerDied","Data":"59a1bc4c7860d55371fbf3dbf27fe085c06e953614e459f7a9c27940dc990c64"} Jan 21 13:08:51 crc kubenswrapper[4765]: I0121 13:08:51.800713 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n2lvd" podStartSLOduration=3.393635547 podStartE2EDuration="5.800683838s" podCreationTimestamp="2026-01-21 13:08:46 +0000 UTC" firstStartedPulling="2026-01-21 13:08:48.718542157 +0000 UTC m=+389.736267979" lastFinishedPulling="2026-01-21 13:08:51.125590448 +0000 UTC m=+392.143316270" observedRunningTime="2026-01-21 13:08:51.800362129 +0000 UTC m=+392.818087951" watchObservedRunningTime="2026-01-21 13:08:51.800683838 +0000 UTC m=+392.818409660" Jan 21 13:08:52 crc kubenswrapper[4765]: I0121 13:08:52.781379 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" event={"ID":"fa362580-ffe9-40d9-a338-ed618e1011fb","Type":"ContainerStarted","Data":"3528810d1f55c6e9bc74c6d803102bf387f3719a228856af547ad7d352cf6437"} Jan 21 13:08:52 crc kubenswrapper[4765]: I0121 13:08:52.781542 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:08:52 crc kubenswrapper[4765]: I0121 13:08:52.785313 4765 generic.go:334] "Generic (PLEG): container finished" podID="1290053f-ebc1-4a58-963a-333751e51945" containerID="7db80324b75062e25a9fe3f96ef751eb2076daa89e00e2c8be6f6a0510f4f147" exitCode=0 Jan 21 13:08:52 crc kubenswrapper[4765]: I0121 13:08:52.785404 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5vxr" 
event={"ID":"1290053f-ebc1-4a58-963a-333751e51945","Type":"ContainerDied","Data":"7db80324b75062e25a9fe3f96ef751eb2076daa89e00e2c8be6f6a0510f4f147"} Jan 21 13:08:52 crc kubenswrapper[4765]: I0121 13:08:52.788989 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fskff" event={"ID":"f231dd53-72c3-4d70-879f-d840f959c6c6","Type":"ContainerStarted","Data":"0187b3609897ec39b4b3d47bbd8cdca684eac03e796d7f96aa92127ad480c42f"} Jan 21 13:08:52 crc kubenswrapper[4765]: I0121 13:08:52.812886 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" podStartSLOduration=2.8128586589999998 podStartE2EDuration="2.812858659s" podCreationTimestamp="2026-01-21 13:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:08:52.80730532 +0000 UTC m=+393.825031152" watchObservedRunningTime="2026-01-21 13:08:52.812858659 +0000 UTC m=+393.830584481" Jan 21 13:08:52 crc kubenswrapper[4765]: I0121 13:08:52.832008 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fskff" podStartSLOduration=3.357126706 podStartE2EDuration="5.831986071s" podCreationTimestamp="2026-01-21 13:08:47 +0000 UTC" firstStartedPulling="2026-01-21 13:08:49.729949314 +0000 UTC m=+390.747675136" lastFinishedPulling="2026-01-21 13:08:52.204808679 +0000 UTC m=+393.222534501" observedRunningTime="2026-01-21 13:08:52.831530197 +0000 UTC m=+393.849256019" watchObservedRunningTime="2026-01-21 13:08:52.831986071 +0000 UTC m=+393.849711893" Jan 21 13:08:53 crc kubenswrapper[4765]: I0121 13:08:53.799680 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5vxr" event={"ID":"1290053f-ebc1-4a58-963a-333751e51945","Type":"ContainerStarted","Data":"5e47ce8a7801033b25891bea105e864746dae44a05668484ced60a0f101e61c1"} Jan 21 13:08:53 crc kubenswrapper[4765]: I0121 13:08:53.831738 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l5vxr" podStartSLOduration=2.293437262 podStartE2EDuration="4.831708114s" podCreationTimestamp="2026-01-21 13:08:49 +0000 UTC" firstStartedPulling="2026-01-21 13:08:50.74586837 +0000 UTC m=+391.763594202" lastFinishedPulling="2026-01-21 13:08:53.284139232 +0000 UTC m=+394.301865054" observedRunningTime="2026-01-21 13:08:53.826336131 +0000 UTC m=+394.844061963" watchObservedRunningTime="2026-01-21 13:08:53.831708114 +0000 UTC m=+394.849433946" Jan 21 13:08:55 crc kubenswrapper[4765]: I0121 13:08:55.891503 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-54n7h" Jan 21 13:08:55 crc kubenswrapper[4765]: I0121 13:08:55.891805 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-54n7h" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:55.964248 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-54n7h" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:56.864689 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-54n7h" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:57.283610 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-n2lvd" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:57.283682 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n2lvd" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:57.346668 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n2lvd" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:57.870500 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n2lvd" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:58.238634 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8"] Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:58.239223 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" podUID="ddfec557-645d-4fa3-9545-38a78135a452" containerName="route-controller-manager" containerID="cri-o://bbf6f8af01e577a5a4c56a80afc9e3326b4e4ea782a6f677d0721bd988a4e767" gracePeriod=30 Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:58.302248 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fskff" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:58.302495 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fskff" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:58.344795 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fskff" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:58.891902 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fskff" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.727797 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l5vxr" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.728056 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l5vxr" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.772552 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l5vxr" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.840359 4765 generic.go:334] "Generic (PLEG): container finished" podID="ddfec557-645d-4fa3-9545-38a78135a452" containerID="bbf6f8af01e577a5a4c56a80afc9e3326b4e4ea782a6f677d0721bd988a4e767" exitCode=0 Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.840747 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" event={"ID":"ddfec557-645d-4fa3-9545-38a78135a452","Type":"ContainerDied","Data":"bbf6f8af01e577a5a4c56a80afc9e3326b4e4ea782a6f677d0721bd988a4e767"} Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.886128 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l5vxr" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.895230 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.922800 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ddfec557-645d-4fa3-9545-38a78135a452-client-ca\") pod \"ddfec557-645d-4fa3-9545-38a78135a452\" (UID: \"ddfec557-645d-4fa3-9545-38a78135a452\") " Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.922890 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddfec557-645d-4fa3-9545-38a78135a452-serving-cert\") pod \"ddfec557-645d-4fa3-9545-38a78135a452\" (UID: \"ddfec557-645d-4fa3-9545-38a78135a452\") " Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.922959 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kbqf\" (UniqueName: \"kubernetes.io/projected/ddfec557-645d-4fa3-9545-38a78135a452-kube-api-access-7kbqf\") pod \"ddfec557-645d-4fa3-9545-38a78135a452\" (UID: \"ddfec557-645d-4fa3-9545-38a78135a452\") " Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.923058 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddfec557-645d-4fa3-9545-38a78135a452-config\") pod \"ddfec557-645d-4fa3-9545-38a78135a452\" (UID: \"ddfec557-645d-4fa3-9545-38a78135a452\") " Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.925966 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddfec557-645d-4fa3-9545-38a78135a452-client-ca" (OuterVolumeSpecName: "client-ca") pod "ddfec557-645d-4fa3-9545-38a78135a452" (UID: "ddfec557-645d-4fa3-9545-38a78135a452"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.930490 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddfec557-645d-4fa3-9545-38a78135a452-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ddfec557-645d-4fa3-9545-38a78135a452" (UID: "ddfec557-645d-4fa3-9545-38a78135a452"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.932290 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddfec557-645d-4fa3-9545-38a78135a452-config" (OuterVolumeSpecName: "config") pod "ddfec557-645d-4fa3-9545-38a78135a452" (UID: "ddfec557-645d-4fa3-9545-38a78135a452"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.950999 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddfec557-645d-4fa3-9545-38a78135a452-kube-api-access-7kbqf" (OuterVolumeSpecName: "kube-api-access-7kbqf") pod "ddfec557-645d-4fa3-9545-38a78135a452" (UID: "ddfec557-645d-4fa3-9545-38a78135a452"). InnerVolumeSpecName "kube-api-access-7kbqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.978363 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2"] Jan 21 13:08:59 crc kubenswrapper[4765]: E0121 13:08:59.979344 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddfec557-645d-4fa3-9545-38a78135a452" containerName="route-controller-manager" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.979372 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddfec557-645d-4fa3-9545-38a78135a452" containerName="route-controller-manager" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.979622 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddfec557-645d-4fa3-9545-38a78135a452" containerName="route-controller-manager" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.980500 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2" Jan 21 13:08:59 crc kubenswrapper[4765]: I0121 13:08:59.987544 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2"] Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.025187 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f7ff633-86ff-4b49-aecb-293c036b2073-config\") pod \"route-controller-manager-85968449f7-9hsn2\" (UID: \"3f7ff633-86ff-4b49-aecb-293c036b2073\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2" Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.025294 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f7ff633-86ff-4b49-aecb-293c036b2073-client-ca\") pod \"route-controller-manager-85968449f7-9hsn2\" (UID: \"3f7ff633-86ff-4b49-aecb-293c036b2073\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2" Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.025328 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7ff633-86ff-4b49-aecb-293c036b2073-serving-cert\") pod \"route-controller-manager-85968449f7-9hsn2\" (UID: \"3f7ff633-86ff-4b49-aecb-293c036b2073\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2" Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.025497 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkxbf\" (UniqueName: \"kubernetes.io/projected/3f7ff633-86ff-4b49-aecb-293c036b2073-kube-api-access-lkxbf\") pod \"route-controller-manager-85968449f7-9hsn2\" (UID: \"3f7ff633-86ff-4b49-aecb-293c036b2073\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2" Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.025576 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddfec557-645d-4fa3-9545-38a78135a452-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.025590 4765 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/ddfec557-645d-4fa3-9545-38a78135a452-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.025602 4765 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddfec557-645d-4fa3-9545-38a78135a452-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.025631 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kbqf\" (UniqueName: \"kubernetes.io/projected/ddfec557-645d-4fa3-9545-38a78135a452-kube-api-access-7kbqf\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.127943 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f7ff633-86ff-4b49-aecb-293c036b2073-config\") pod \"route-controller-manager-85968449f7-9hsn2\" (UID: \"3f7ff633-86ff-4b49-aecb-293c036b2073\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2" Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.128035 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f7ff633-86ff-4b49-aecb-293c036b2073-client-ca\") pod \"route-controller-manager-85968449f7-9hsn2\" (UID: \"3f7ff633-86ff-4b49-aecb-293c036b2073\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2" Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.128066 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7ff633-86ff-4b49-aecb-293c036b2073-serving-cert\") pod \"route-controller-manager-85968449f7-9hsn2\" (UID: \"3f7ff633-86ff-4b49-aecb-293c036b2073\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2" Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.128121 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkxbf\" (UniqueName: \"kubernetes.io/projected/3f7ff633-86ff-4b49-aecb-293c036b2073-kube-api-access-lkxbf\") pod \"route-controller-manager-85968449f7-9hsn2\" (UID: \"3f7ff633-86ff-4b49-aecb-293c036b2073\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2" Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.129565 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f7ff633-86ff-4b49-aecb-293c036b2073-client-ca\") pod \"route-controller-manager-85968449f7-9hsn2\" (UID: \"3f7ff633-86ff-4b49-aecb-293c036b2073\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2" Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.131607 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f7ff633-86ff-4b49-aecb-293c036b2073-config\") pod \"route-controller-manager-85968449f7-9hsn2\" (UID: \"3f7ff633-86ff-4b49-aecb-293c036b2073\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2" Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.134294 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7ff633-86ff-4b49-aecb-293c036b2073-serving-cert\") pod \"route-controller-manager-85968449f7-9hsn2\" (UID: 
\"3f7ff633-86ff-4b49-aecb-293c036b2073\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2" Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.148960 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkxbf\" (UniqueName: \"kubernetes.io/projected/3f7ff633-86ff-4b49-aecb-293c036b2073-kube-api-access-lkxbf\") pod \"route-controller-manager-85968449f7-9hsn2\" (UID: \"3f7ff633-86ff-4b49-aecb-293c036b2073\") " pod="openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2" Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.310824 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2" Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.763044 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2"] Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.860981 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2" event={"ID":"3f7ff633-86ff-4b49-aecb-293c036b2073","Type":"ContainerStarted","Data":"a03b744136ddc62ccb30a8e4f696082db62c5dbad80a4c2676bb857527ad0063"} Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.863012 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" event={"ID":"ddfec557-645d-4fa3-9545-38a78135a452","Type":"ContainerDied","Data":"34c854afaf702abeedd73717377e44c50b7f81b362d5ec62cb8a5ed2eac15067"} Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.863126 4765 scope.go:117] "RemoveContainer" containerID="bbf6f8af01e577a5a4c56a80afc9e3326b4e4ea782a6f677d0721bd988a4e767" Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.863258 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8" Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.898983 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8"] Jan 21 13:09:00 crc kubenswrapper[4765]: I0121 13:09:00.905698 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7484d9ddcc-qghj8"] Jan 21 13:09:01 crc kubenswrapper[4765]: I0121 13:09:01.631873 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddfec557-645d-4fa3-9545-38a78135a452" path="/var/lib/kubelet/pods/ddfec557-645d-4fa3-9545-38a78135a452/volumes" Jan 21 13:09:02 crc kubenswrapper[4765]: I0121 13:09:02.879922 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2" event={"ID":"3f7ff633-86ff-4b49-aecb-293c036b2073","Type":"ContainerStarted","Data":"7306d078b46fd1ae75c380580eb34c60319dfc4c0434d6d6aad8808ae1f2f331"} Jan 21 13:09:02 crc kubenswrapper[4765]: I0121 13:09:02.880315 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2" Jan 21 13:09:02 crc kubenswrapper[4765]: I0121 13:09:02.906361 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2" podStartSLOduration=4.906331315 podStartE2EDuration="4.906331315s" podCreationTimestamp="2026-01-21 13:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:09:02.903738106 +0000 UTC m=+403.921463938" watchObservedRunningTime="2026-01-21 13:09:02.906331315 +0000 UTC m=+403.924057137" Jan 21 13:09:03 crc kubenswrapper[4765]: I0121 13:09:03.168883 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85968449f7-9hsn2" Jan 21 13:09:11 crc kubenswrapper[4765]: I0121 13:09:11.240874 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-2cwx9" Jan 21 13:09:11 crc kubenswrapper[4765]: I0121 13:09:11.300567 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2x4pn"] Jan 21 13:09:14 crc kubenswrapper[4765]: I0121 13:09:14.445824 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:09:14 crc kubenswrapper[4765]: I0121 13:09:14.446189 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:09:36 crc kubenswrapper[4765]: I0121 13:09:36.362814 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" podUID="5d4723c5-1628-4481-83b8-498fd4e5362e" 
containerName="registry" containerID="cri-o://46e1eca122573a704800301b2bbd932930edc1f239cdab9146563d896a2f94d4" gracePeriod=30 Jan 21 13:09:36 crc kubenswrapper[4765]: I0121 13:09:36.835656 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.002490 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d4723c5-1628-4481-83b8-498fd4e5362e-trusted-ca\") pod \"5d4723c5-1628-4481-83b8-498fd4e5362e\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.002718 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"5d4723c5-1628-4481-83b8-498fd4e5362e\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.002782 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4trd\" (UniqueName: \"kubernetes.io/projected/5d4723c5-1628-4481-83b8-498fd4e5362e-kube-api-access-x4trd\") pod \"5d4723c5-1628-4481-83b8-498fd4e5362e\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.002920 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5d4723c5-1628-4481-83b8-498fd4e5362e-registry-tls\") pod \"5d4723c5-1628-4481-83b8-498fd4e5362e\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.003022 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5d4723c5-1628-4481-83b8-498fd4e5362e-ca-trust-extracted\") pod \"5d4723c5-1628-4481-83b8-498fd4e5362e\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.003123 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5d4723c5-1628-4481-83b8-498fd4e5362e-installation-pull-secrets\") pod \"5d4723c5-1628-4481-83b8-498fd4e5362e\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.003158 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d4723c5-1628-4481-83b8-498fd4e5362e-bound-sa-token\") pod \"5d4723c5-1628-4481-83b8-498fd4e5362e\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.003332 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5d4723c5-1628-4481-83b8-498fd4e5362e-registry-certificates\") pod \"5d4723c5-1628-4481-83b8-498fd4e5362e\" (UID: \"5d4723c5-1628-4481-83b8-498fd4e5362e\") " Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.004939 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d4723c5-1628-4481-83b8-498fd4e5362e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5d4723c5-1628-4481-83b8-498fd4e5362e" (UID: 
"5d4723c5-1628-4481-83b8-498fd4e5362e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.005237 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d4723c5-1628-4481-83b8-498fd4e5362e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "5d4723c5-1628-4481-83b8-498fd4e5362e" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.011906 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d4723c5-1628-4481-83b8-498fd4e5362e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "5d4723c5-1628-4481-83b8-498fd4e5362e" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.015393 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d4723c5-1628-4481-83b8-498fd4e5362e-kube-api-access-x4trd" (OuterVolumeSpecName: "kube-api-access-x4trd") pod "5d4723c5-1628-4481-83b8-498fd4e5362e" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e"). InnerVolumeSpecName "kube-api-access-x4trd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.017702 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d4723c5-1628-4481-83b8-498fd4e5362e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "5d4723c5-1628-4481-83b8-498fd4e5362e" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.018271 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "5d4723c5-1628-4481-83b8-498fd4e5362e" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.018818 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d4723c5-1628-4481-83b8-498fd4e5362e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "5d4723c5-1628-4481-83b8-498fd4e5362e" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.035821 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d4723c5-1628-4481-83b8-498fd4e5362e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "5d4723c5-1628-4481-83b8-498fd4e5362e" (UID: "5d4723c5-1628-4481-83b8-498fd4e5362e"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.087839 4765 generic.go:334] "Generic (PLEG): container finished" podID="5d4723c5-1628-4481-83b8-498fd4e5362e" containerID="46e1eca122573a704800301b2bbd932930edc1f239cdab9146563d896a2f94d4" exitCode=0 Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.087911 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" event={"ID":"5d4723c5-1628-4481-83b8-498fd4e5362e","Type":"ContainerDied","Data":"46e1eca122573a704800301b2bbd932930edc1f239cdab9146563d896a2f94d4"} Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.087936 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.087967 4765 scope.go:117] "RemoveContainer" containerID="46e1eca122573a704800301b2bbd932930edc1f239cdab9146563d896a2f94d4" Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.087950 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2x4pn" event={"ID":"5d4723c5-1628-4481-83b8-498fd4e5362e","Type":"ContainerDied","Data":"956cb331c5054b32750caf488e622a8ea1c21600d9675dba8a7acaf574c434ca"} Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.104839 4765 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5d4723c5-1628-4481-83b8-498fd4e5362e-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.105742 4765 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5d4723c5-1628-4481-83b8-498fd4e5362e-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.105757 4765 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d4723c5-1628-4481-83b8-498fd4e5362e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.105770 4765 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5d4723c5-1628-4481-83b8-498fd4e5362e-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.105787 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d4723c5-1628-4481-83b8-498fd4e5362e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.105799 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4trd\" (UniqueName: \"kubernetes.io/projected/5d4723c5-1628-4481-83b8-498fd4e5362e-kube-api-access-x4trd\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.105812 4765 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5d4723c5-1628-4481-83b8-498fd4e5362e-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.111597 4765 scope.go:117] "RemoveContainer" containerID="46e1eca122573a704800301b2bbd932930edc1f239cdab9146563d896a2f94d4" Jan 21 13:09:37 crc kubenswrapper[4765]: E0121 13:09:37.112107 4765 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46e1eca122573a704800301b2bbd932930edc1f239cdab9146563d896a2f94d4\": container with ID starting with 46e1eca122573a704800301b2bbd932930edc1f239cdab9146563d896a2f94d4 not found: ID does not exist" containerID="46e1eca122573a704800301b2bbd932930edc1f239cdab9146563d896a2f94d4"
Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.112145 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46e1eca122573a704800301b2bbd932930edc1f239cdab9146563d896a2f94d4"} err="failed to get container status \"46e1eca122573a704800301b2bbd932930edc1f239cdab9146563d896a2f94d4\": rpc error: code = NotFound desc = could not find container \"46e1eca122573a704800301b2bbd932930edc1f239cdab9146563d896a2f94d4\": container with ID starting with 46e1eca122573a704800301b2bbd932930edc1f239cdab9146563d896a2f94d4 not found: ID does not exist"
Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.141415 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2x4pn"]
Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.147036 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2x4pn"]
Jan 21 13:09:37 crc kubenswrapper[4765]: I0121 13:09:37.621517 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d4723c5-1628-4481-83b8-498fd4e5362e" path="/var/lib/kubelet/pods/5d4723c5-1628-4481-83b8-498fd4e5362e/volumes"
Jan 21 13:09:44 crc kubenswrapper[4765]: I0121 13:09:44.446488 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 13:09:44 crc kubenswrapper[4765]: I0121 13:09:44.447849 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 13:09:44 crc kubenswrapper[4765]: I0121 13:09:44.447992 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v72nq"
Jan 21 13:09:44 crc kubenswrapper[4765]: I0121 13:09:44.448968 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f7a5ac8d24692585ce478eff1513b2ab0b0e70857dfc544d9cfa881f0e004073"} pod="openshift-machine-config-operator/machine-config-daemon-v72nq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 13:09:44 crc kubenswrapper[4765]: I0121 13:09:44.449132 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" containerID="cri-o://f7a5ac8d24692585ce478eff1513b2ab0b0e70857dfc544d9cfa881f0e004073" gracePeriod=600
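The sequence above is the kubelet's liveness-probe kill path: the HTTP probe to 127.0.0.1:8798/health is refused, the prober reports failure, the sync loop marks the container unhealthy, and the runtime kills it with the pod's termination grace period (600s here) before restarting it. A stripped-down, stdlib-only sketch of such a probe loop; the period and failure threshold below are illustrative assumptions, not the machine-config-daemon pod's actual settings:

    // liveness_sketch.go — illustrative probe loop, not kubelet code; the
    // period and threshold are assumptions for the sake of the example.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        const (
            url              = "http://127.0.0.1:8798/health"
            period           = 10 * time.Second
            failureThreshold = 3 // consecutive failures before a restart is requested
        )

        client := &http.Client{Timeout: time.Second}
        failures := 0
        for {
            healthy := false
            if resp, err := client.Get(url); err == nil {
                healthy = resp.StatusCode >= 200 && resp.StatusCode < 400
                resp.Body.Close()
            } // "connect: connection refused" stays on the err != nil path
            if healthy {
                failures = 0
            } else {
                failures++
                fmt.Printf("probe failed (%d/%d)\n", failures, failureThreshold)
                if failures >= failureThreshold {
                    fmt.Println("liveness threshold crossed: container would be killed with its grace period and restarted")
                    return
                }
            }
            time.Sleep(period)
        }
    }

Against a down endpoint this crosses the threshold after a few periods, matching the unhealthy -> "Killing container with a grace period" -> ContainerDied/ContainerStarted cycle in the surrounding records, which then repeats at 13:12:44 for the replacement container.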
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode149390c_e4da_4dfd_bed2_b14de058f921.slice/crio-conmon-f7a5ac8d24692585ce478eff1513b2ab0b0e70857dfc544d9cfa881f0e004073.scope\": RecentStats: unable to find data in memory cache]" Jan 21 13:09:45 crc kubenswrapper[4765]: I0121 13:09:45.142219 4765 generic.go:334] "Generic (PLEG): container finished" podID="e149390c-e4da-4dfd-bed2-b14de058f921" containerID="f7a5ac8d24692585ce478eff1513b2ab0b0e70857dfc544d9cfa881f0e004073" exitCode=0 Jan 21 13:09:45 crc kubenswrapper[4765]: I0121 13:09:45.142248 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerDied","Data":"f7a5ac8d24692585ce478eff1513b2ab0b0e70857dfc544d9cfa881f0e004073"} Jan 21 13:09:45 crc kubenswrapper[4765]: I0121 13:09:45.142886 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"2a82e379790a817fc46a03244d0ba93ae43907a95b2b85581d0d985030ba55af"} Jan 21 13:09:45 crc kubenswrapper[4765]: I0121 13:09:45.142974 4765 scope.go:117] "RemoveContainer" containerID="0b2d59f60da075cec4e085865b2cb7e01d49a6fea4b771a680d9864a46ab02ae" Jan 21 13:11:44 crc kubenswrapper[4765]: I0121 13:11:44.446116 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:11:44 crc kubenswrapper[4765]: I0121 13:11:44.446827 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:12:14 crc kubenswrapper[4765]: I0121 13:12:14.446166 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:12:14 crc kubenswrapper[4765]: I0121 13:12:14.447184 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:12:44 crc kubenswrapper[4765]: I0121 13:12:44.446552 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:12:44 crc kubenswrapper[4765]: I0121 13:12:44.447103 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 21 13:12:44 crc kubenswrapper[4765]: I0121 13:12:44.447154 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:12:44 crc kubenswrapper[4765]: I0121 13:12:44.447716 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a82e379790a817fc46a03244d0ba93ae43907a95b2b85581d0d985030ba55af"} pod="openshift-machine-config-operator/machine-config-daemon-v72nq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:12:44 crc kubenswrapper[4765]: I0121 13:12:44.447777 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" containerID="cri-o://2a82e379790a817fc46a03244d0ba93ae43907a95b2b85581d0d985030ba55af" gracePeriod=600 Jan 21 13:12:45 crc kubenswrapper[4765]: I0121 13:12:45.332405 4765 generic.go:334] "Generic (PLEG): container finished" podID="e149390c-e4da-4dfd-bed2-b14de058f921" containerID="2a82e379790a817fc46a03244d0ba93ae43907a95b2b85581d0d985030ba55af" exitCode=0 Jan 21 13:12:45 crc kubenswrapper[4765]: I0121 13:12:45.332469 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerDied","Data":"2a82e379790a817fc46a03244d0ba93ae43907a95b2b85581d0d985030ba55af"} Jan 21 13:12:45 crc kubenswrapper[4765]: I0121 13:12:45.332973 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"f52e9baa6469e50f020fdb819604c74920e3021231bce0736ea82e11d2f65248"} Jan 21 13:12:45 crc kubenswrapper[4765]: I0121 13:12:45.333002 4765 scope.go:117] "RemoveContainer" containerID="f7a5ac8d24692585ce478eff1513b2ab0b0e70857dfc544d9cfa881f0e004073" Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.439786 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7gnzb"] Jan 21 13:14:20 crc kubenswrapper[4765]: E0121 13:14:20.440815 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d4723c5-1628-4481-83b8-498fd4e5362e" containerName="registry" Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.440832 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d4723c5-1628-4481-83b8-498fd4e5362e" containerName="registry" Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.440937 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d4723c5-1628-4481-83b8-498fd4e5362e" containerName="registry" Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.441439 4765 util.go:30] "No sandbox for pod can be found. 
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.447321 4765 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-dgwh7"
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.448271 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.448329 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.460678 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7gnzb"]
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.468719 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k79w\" (UniqueName: \"kubernetes.io/projected/861d65e3-bec0-4a97-9ef1-2ff8d0c660fe-kube-api-access-8k79w\") pod \"cert-manager-cainjector-cf98fcc89-7gnzb\" (UID: \"861d65e3-bec0-4a97-9ef1-2ff8d0c660fe\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gnzb"
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.477482 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-cssjm"]
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.478493 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-cssjm"
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.487040 4765 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-68nsz"
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.500371 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-gznfw"]
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.501385 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-gznfw"
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.503721 4765 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-tn9b6"
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.507865 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-cssjm"]
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.523648 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-gznfw"]
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.570538 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpsbv\" (UniqueName: \"kubernetes.io/projected/34bef5eb-722e-4dd8-b19a-ae2ec67a4c93-kube-api-access-mpsbv\") pod \"cert-manager-webhook-687f57d79b-gznfw\" (UID: \"34bef5eb-722e-4dd8-b19a-ae2ec67a4c93\") " pod="cert-manager/cert-manager-webhook-687f57d79b-gznfw"
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.570660 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k79w\" (UniqueName: \"kubernetes.io/projected/861d65e3-bec0-4a97-9ef1-2ff8d0c660fe-kube-api-access-8k79w\") pod \"cert-manager-cainjector-cf98fcc89-7gnzb\" (UID: \"861d65e3-bec0-4a97-9ef1-2ff8d0c660fe\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gnzb"
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.570692 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4dwl\" (UniqueName: \"kubernetes.io/projected/30c79cf6-f62c-498b-8c0b-184d3eec661f-kube-api-access-x4dwl\") pod \"cert-manager-858654f9db-cssjm\" (UID: \"30c79cf6-f62c-498b-8c0b-184d3eec661f\") " pod="cert-manager/cert-manager-858654f9db-cssjm"
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.613867 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k79w\" (UniqueName: \"kubernetes.io/projected/861d65e3-bec0-4a97-9ef1-2ff8d0c660fe-kube-api-access-8k79w\") pod \"cert-manager-cainjector-cf98fcc89-7gnzb\" (UID: \"861d65e3-bec0-4a97-9ef1-2ff8d0c660fe\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gnzb"
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.672073 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4dwl\" (UniqueName: \"kubernetes.io/projected/30c79cf6-f62c-498b-8c0b-184d3eec661f-kube-api-access-x4dwl\") pod \"cert-manager-858654f9db-cssjm\" (UID: \"30c79cf6-f62c-498b-8c0b-184d3eec661f\") " pod="cert-manager/cert-manager-858654f9db-cssjm"
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.672196 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpsbv\" (UniqueName: \"kubernetes.io/projected/34bef5eb-722e-4dd8-b19a-ae2ec67a4c93-kube-api-access-mpsbv\") pod \"cert-manager-webhook-687f57d79b-gznfw\" (UID: \"34bef5eb-722e-4dd8-b19a-ae2ec67a4c93\") " pod="cert-manager/cert-manager-webhook-687f57d79b-gznfw"
Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.691493 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4dwl\" (UniqueName: \"kubernetes.io/projected/30c79cf6-f62c-498b-8c0b-184d3eec661f-kube-api-access-x4dwl\") pod \"cert-manager-858654f9db-cssjm\" (UID: \"30c79cf6-f62c-498b-8c0b-184d3eec661f\") " pod="cert-manager/cert-manager-858654f9db-cssjm"
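Each new pod gets a generated kube-api-access-* projected volume (8k79w, x4dwl, mpsbv above); the reflector lines show the kubelet priming its caches for the Secrets and ConfigMaps those pods reference, including kube-root-ca.crt and OpenShift's openshift-service-ca.crt. The volume bundles a bound service-account token, the cluster CA bundle, and the pod namespace. A Go sketch of its approximate shape, with the conventional 3607-second token lifetime assumed (the actual projection is built by the API server and is not printed in this log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // assumed default token lifetime

	vol := corev1.Volume{
		Name: "kube-api-access-8k79w",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// bound service-account token, rotated by the kubelet
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					// cluster CA bundle from the kube-root-ca.crt ConfigMap
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					// pod namespace via the downward API
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}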
pod="cert-manager/cert-manager-858654f9db-cssjm" Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.691606 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpsbv\" (UniqueName: \"kubernetes.io/projected/34bef5eb-722e-4dd8-b19a-ae2ec67a4c93-kube-api-access-mpsbv\") pod \"cert-manager-webhook-687f57d79b-gznfw\" (UID: \"34bef5eb-722e-4dd8-b19a-ae2ec67a4c93\") " pod="cert-manager/cert-manager-webhook-687f57d79b-gznfw" Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.756945 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gnzb" Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.794993 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-cssjm" Jan 21 13:14:20 crc kubenswrapper[4765]: I0121 13:14:20.820272 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-gznfw" Jan 21 13:14:21 crc kubenswrapper[4765]: I0121 13:14:21.057278 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7gnzb"] Jan 21 13:14:21 crc kubenswrapper[4765]: I0121 13:14:21.068550 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:14:21 crc kubenswrapper[4765]: I0121 13:14:21.119230 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-cssjm"] Jan 21 13:14:21 crc kubenswrapper[4765]: I0121 13:14:21.168720 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-gznfw"] Jan 21 13:14:21 crc kubenswrapper[4765]: W0121 13:14:21.173389 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34bef5eb_722e_4dd8_b19a_ae2ec67a4c93.slice/crio-c569cac8ad29cdd1fcfdad60e912e6691321d5e0be96db135a21c4c65f7a5b27 WatchSource:0}: Error finding container c569cac8ad29cdd1fcfdad60e912e6691321d5e0be96db135a21c4c65f7a5b27: Status 404 returned error can't find the container with id c569cac8ad29cdd1fcfdad60e912e6691321d5e0be96db135a21c4c65f7a5b27 Jan 21 13:14:21 crc kubenswrapper[4765]: I0121 13:14:21.896123 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-gznfw" event={"ID":"34bef5eb-722e-4dd8-b19a-ae2ec67a4c93","Type":"ContainerStarted","Data":"c569cac8ad29cdd1fcfdad60e912e6691321d5e0be96db135a21c4c65f7a5b27"} Jan 21 13:14:21 crc kubenswrapper[4765]: I0121 13:14:21.897610 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gnzb" event={"ID":"861d65e3-bec0-4a97-9ef1-2ff8d0c660fe","Type":"ContainerStarted","Data":"a02b0e98993da3d5c35904199bb24f6f30198d13e7a8a1eb80d1874d0ccaca2a"} Jan 21 13:14:21 crc kubenswrapper[4765]: I0121 13:14:21.898534 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-cssjm" event={"ID":"30c79cf6-f62c-498b-8c0b-184d3eec661f","Type":"ContainerStarted","Data":"5a63db2c7bf05e78dfaccb934b4cee05fb3308e89ad3a20e3df87c8b47a08b15"} Jan 21 13:14:28 crc kubenswrapper[4765]: I0121 13:14:28.983284 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x677d"] Jan 21 13:14:28 crc kubenswrapper[4765]: I0121 13:14:28.984524 4765 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovn-controller" containerID="cri-o://9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083" gracePeriod=30 Jan 21 13:14:28 crc kubenswrapper[4765]: I0121 13:14:28.985256 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="sbdb" containerID="cri-o://244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8" gracePeriod=30 Jan 21 13:14:28 crc kubenswrapper[4765]: I0121 13:14:28.985325 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="nbdb" containerID="cri-o://2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d" gracePeriod=30 Jan 21 13:14:28 crc kubenswrapper[4765]: I0121 13:14:28.985380 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovn-acl-logging" containerID="cri-o://b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741" gracePeriod=30 Jan 21 13:14:28 crc kubenswrapper[4765]: I0121 13:14:28.985489 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="northd" containerID="cri-o://040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c" gracePeriod=30 Jan 21 13:14:28 crc kubenswrapper[4765]: I0121 13:14:28.985612 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22" gracePeriod=30 Jan 21 13:14:28 crc kubenswrapper[4765]: I0121 13:14:28.985426 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="kube-rbac-proxy-node" containerID="cri-o://bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e" gracePeriod=30 Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.035248 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovnkube-controller" containerID="cri-o://73e26c6ffdf8a354d3a45016f806d59bea6134a67cd8caa6a234ab33001ac041" gracePeriod=30 Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.955396 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bplfq_d9b9a5be-6b15-46d2-8715-506efdae8ae7/kube-multus/2.log" Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.955851 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bplfq_d9b9a5be-6b15-46d2-8715-506efdae8ae7/kube-multus/1.log" Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.955902 4765 generic.go:334] "Generic (PLEG): container finished" podID="d9b9a5be-6b15-46d2-8715-506efdae8ae7" containerID="1ae915ebd49fe934c46ddf83c4203b9e4892daa00e041b4eb261c093882f696f" exitCode=2 Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.955975 4765 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bplfq" event={"ID":"d9b9a5be-6b15-46d2-8715-506efdae8ae7","Type":"ContainerDied","Data":"1ae915ebd49fe934c46ddf83c4203b9e4892daa00e041b4eb261c093882f696f"} Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.956021 4765 scope.go:117] "RemoveContainer" containerID="79123ef5ce55b0a6e560030a8178ca3e5f52456eca3c33dc0598e5612c71fa3f" Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.956651 4765 scope.go:117] "RemoveContainer" containerID="1ae915ebd49fe934c46ddf83c4203b9e4892daa00e041b4eb261c093882f696f" Jan 21 13:14:29 crc kubenswrapper[4765]: E0121 13:14:29.957001 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-bplfq_openshift-multus(d9b9a5be-6b15-46d2-8715-506efdae8ae7)\"" pod="openshift-multus/multus-bplfq" podUID="d9b9a5be-6b15-46d2-8715-506efdae8ae7" Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.958840 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovnkube-controller/3.log" Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.963634 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovn-acl-logging/0.log" Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.964715 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovn-controller/0.log" Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.965182 4765 generic.go:334] "Generic (PLEG): container finished" podID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerID="73e26c6ffdf8a354d3a45016f806d59bea6134a67cd8caa6a234ab33001ac041" exitCode=0 Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.965288 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerDied","Data":"73e26c6ffdf8a354d3a45016f806d59bea6134a67cd8caa6a234ab33001ac041"} Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.965373 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerDied","Data":"244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8"} Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.965321 4765 generic.go:334] "Generic (PLEG): container finished" podID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerID="244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8" exitCode=0 Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.965575 4765 generic.go:334] "Generic (PLEG): container finished" podID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerID="2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d" exitCode=0 Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.965646 4765 generic.go:334] "Generic (PLEG): container finished" podID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerID="040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c" exitCode=0 Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.965716 4765 generic.go:334] "Generic (PLEG): container finished" podID="cd80c14d-ebec-4d65-8116-149400d6f8be" 
containerID="7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22" exitCode=0 Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.965785 4765 generic.go:334] "Generic (PLEG): container finished" podID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerID="bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e" exitCode=0 Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.965900 4765 generic.go:334] "Generic (PLEG): container finished" podID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerID="b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741" exitCode=143 Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.965660 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerDied","Data":"2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d"} Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.965999 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerDied","Data":"040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c"} Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.966019 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerDied","Data":"7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22"} Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.966033 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerDied","Data":"bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e"} Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.966045 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerDied","Data":"b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741"} Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.966056 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerDied","Data":"9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083"} Jan 21 13:14:29 crc kubenswrapper[4765]: I0121 13:14:29.965975 4765 generic.go:334] "Generic (PLEG): container finished" podID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerID="9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083" exitCode=143 Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.098911 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovnkube-controller/3.log" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.099492 4765 scope.go:117] "RemoveContainer" containerID="de49c802f91ed570b843dd8ca4ae6d4d198043461ef29509f6ae58e5cc55250a" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.102183 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovn-acl-logging/0.log" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.102982 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovn-controller/0.log" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.103644 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.169553 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sjk68"] Jan 21 13:14:30 crc kubenswrapper[4765]: E0121 13:14:30.169819 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.169833 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 13:14:30 crc kubenswrapper[4765]: E0121 13:14:30.169848 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovnkube-controller" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.169854 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovnkube-controller" Jan 21 13:14:30 crc kubenswrapper[4765]: E0121 13:14:30.169866 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovn-controller" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.169873 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovn-controller" Jan 21 13:14:30 crc kubenswrapper[4765]: E0121 13:14:30.169882 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="nbdb" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.169888 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="nbdb" Jan 21 13:14:30 crc kubenswrapper[4765]: E0121 13:14:30.169898 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovnkube-controller" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.169905 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovnkube-controller" Jan 21 13:14:30 crc kubenswrapper[4765]: E0121 13:14:30.169912 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="kube-rbac-proxy-node" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.169919 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="kube-rbac-proxy-node" Jan 21 13:14:30 crc kubenswrapper[4765]: E0121 13:14:30.169927 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovnkube-controller" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.169932 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovnkube-controller" Jan 21 13:14:30 crc kubenswrapper[4765]: E0121 13:14:30.169941 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="kubecfg-setup" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 
13:14:30.169946 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="kubecfg-setup" Jan 21 13:14:30 crc kubenswrapper[4765]: E0121 13:14:30.169955 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="northd" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.169961 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="northd" Jan 21 13:14:30 crc kubenswrapper[4765]: E0121 13:14:30.169987 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovn-acl-logging" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.169993 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovn-acl-logging" Jan 21 13:14:30 crc kubenswrapper[4765]: E0121 13:14:30.170003 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="sbdb" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.170008 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="sbdb" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.170098 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="northd" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.170111 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovnkube-controller" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.170118 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="nbdb" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.170126 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovn-controller" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.170134 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="sbdb" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.170142 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovnkube-controller" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.170149 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovnkube-controller" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.170157 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="kube-rbac-proxy-node" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.170166 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.170179 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovn-acl-logging" Jan 21 13:14:30 crc kubenswrapper[4765]: E0121 13:14:30.170294 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovnkube-controller" Jan 21 13:14:30 crc 
kubenswrapper[4765]: I0121 13:14:30.170302 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovnkube-controller" Jan 21 13:14:30 crc kubenswrapper[4765]: E0121 13:14:30.170311 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovnkube-controller" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.170317 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovnkube-controller" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.170421 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovnkube-controller" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.170428 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" containerName="ovnkube-controller" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.180397 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.259410 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd80c14d-ebec-4d65-8116-149400d6f8be-ovn-node-metrics-cert\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.259479 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-run-openvswitch\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.259515 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-log-socket\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.259560 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-node-log\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.259584 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-slash\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.259613 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-ovnkube-config\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.259651 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-run-ovn-kubernetes\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.259687 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-systemd-units\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.259725 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-etc-openvswitch\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.259743 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-run-systemd\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.259767 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-run-netns\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.259805 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-var-lib-openvswitch\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.259860 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-run-ovn\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.259886 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-cni-bin\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.259909 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-var-lib-cni-networks-ovn-kubernetes\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.259937 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-env-overrides\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.259975 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-ovnkube-script-lib\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.260011 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-kubelet\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.260039 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9t46\" (UniqueName: \"kubernetes.io/projected/cd80c14d-ebec-4d65-8116-149400d6f8be-kube-api-access-q9t46\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.260069 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-cni-netd\") pod \"cd80c14d-ebec-4d65-8116-149400d6f8be\" (UID: \"cd80c14d-ebec-4d65-8116-149400d6f8be\") " Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.260065 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.260135 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.260163 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-log-socket" (OuterVolumeSpecName: "log-socket") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.260154 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.260200 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-node-log" (OuterVolumeSpecName: "node-log") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.260769 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.260793 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.260809 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-slash" (OuterVolumeSpecName: "host-slash") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.260826 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.260842 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.260864 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261016 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261047 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261153 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261184 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261277 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261336 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261308 4765 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-log-socket\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261380 4765 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-node-log\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261394 4765 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-slash\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261403 4765 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261412 4765 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261423 4765 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261431 4765 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261439 4765 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261448 4765 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261460 4765 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261494 4765 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261504 4765 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261513 4765 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/cd80c14d-ebec-4d65-8116-149400d6f8be-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261522 4765 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.261530 4765 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.265930 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd80c14d-ebec-4d65-8116-149400d6f8be-kube-api-access-q9t46" (OuterVolumeSpecName: "kube-api-access-q9t46") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "kube-api-access-q9t46". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.266038 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd80c14d-ebec-4d65-8116-149400d6f8be-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.276682 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "cd80c14d-ebec-4d65-8116-149400d6f8be" (UID: "cd80c14d-ebec-4d65-8116-149400d6f8be"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.362788 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-cni-netd\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.362854 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.362922 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/60f3537f-c606-4f26-9671-380063ef129f-ovnkube-config\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.362963 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-slash\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.362993 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-run-ovn-kubernetes\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363020 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/60f3537f-c606-4f26-9671-380063ef129f-ovnkube-script-lib\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363044 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-systemd-units\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363085 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-kubelet\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363113 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-cni-bin\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363140 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-run-ovn\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363167 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-etc-openvswitch\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363189 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-run-openvswitch\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363252 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-log-socket\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363280 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60f3537f-c606-4f26-9671-380063ef129f-env-overrides\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363306 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/60f3537f-c606-4f26-9671-380063ef129f-ovn-node-metrics-cert\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363330 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-run-systemd\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363353 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-node-log\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363377 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-run-netns\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363400 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-var-lib-openvswitch\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363423 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxn8d\" (UniqueName: \"kubernetes.io/projected/60f3537f-c606-4f26-9671-380063ef129f-kube-api-access-qxn8d\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363469 4765 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363484 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9t46\" (UniqueName: \"kubernetes.io/projected/cd80c14d-ebec-4d65-8116-149400d6f8be-kube-api-access-q9t46\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363496 4765 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363508 4765 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd80c14d-ebec-4d65-8116-149400d6f8be-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.363520 4765 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd80c14d-ebec-4d65-8116-149400d6f8be-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464375 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-cni-netd\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464449 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464456 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-cni-netd\") pod \"ovnkube-node-sjk68\" (UID: 
\"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464476 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/60f3537f-c606-4f26-9671-380063ef129f-ovnkube-config\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464521 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-slash\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464552 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-run-ovn-kubernetes\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464581 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/60f3537f-c606-4f26-9671-380063ef129f-ovnkube-script-lib\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464606 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-systemd-units\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464644 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-kubelet\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464669 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-cni-bin\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464693 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-run-ovn\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464718 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-etc-openvswitch\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 
13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464740 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-run-openvswitch\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464759 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-log-socket\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464784 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60f3537f-c606-4f26-9671-380063ef129f-env-overrides\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464809 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/60f3537f-c606-4f26-9671-380063ef129f-ovn-node-metrics-cert\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464834 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-run-systemd\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464857 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-run-netns\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464879 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-node-log\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464901 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-var-lib-openvswitch\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.464926 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxn8d\" (UniqueName: \"kubernetes.io/projected/60f3537f-c606-4f26-9671-380063ef129f-kube-api-access-qxn8d\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.465315 4765 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/60f3537f-c606-4f26-9671-380063ef129f-ovnkube-config\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.465374 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.465414 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-run-openvswitch\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.465414 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-slash\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.465441 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-log-socket\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.465470 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-run-ovn-kubernetes\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.465939 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60f3537f-c606-4f26-9671-380063ef129f-env-overrides\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.466016 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/60f3537f-c606-4f26-9671-380063ef129f-ovnkube-script-lib\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.466061 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-systemd-units\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.466094 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-kubelet\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.466125 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-cni-bin\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.466155 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-run-ovn\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.466185 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-etc-openvswitch\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.466231 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-host-run-netns\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.466260 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-run-systemd\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.466285 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-node-log\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.466315 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60f3537f-c606-4f26-9671-380063ef129f-var-lib-openvswitch\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.469952 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/60f3537f-c606-4f26-9671-380063ef129f-ovn-node-metrics-cert\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.482948 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxn8d\" (UniqueName: \"kubernetes.io/projected/60f3537f-c606-4f26-9671-380063ef129f-kube-api-access-qxn8d\") pod \"ovnkube-node-sjk68\" (UID: \"60f3537f-c606-4f26-9671-380063ef129f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.510433 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.975144 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" event={"ID":"60f3537f-c606-4f26-9671-380063ef129f","Type":"ContainerStarted","Data":"d0b51c992da4871ffdf5fe53160b838d392717d155fa95eb33387a6ca4079fa1"} Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.975543 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" event={"ID":"60f3537f-c606-4f26-9671-380063ef129f","Type":"ContainerStarted","Data":"0be0b44fbf62e497b794c30a1d674bce641bf80f287872ebf313f2dd286859fb"} Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.977623 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bplfq_d9b9a5be-6b15-46d2-8715-506efdae8ae7/kube-multus/2.log" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.983931 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovn-acl-logging/0.log" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.985514 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x677d_cd80c14d-ebec-4d65-8116-149400d6f8be/ovn-controller/0.log" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.985966 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" event={"ID":"cd80c14d-ebec-4d65-8116-149400d6f8be","Type":"ContainerDied","Data":"96dc92bd7b3deceb8264fd2e0ed1448add4eb9487a80efec115495823ae95818"} Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.986020 4765 scope.go:117] "RemoveContainer" containerID="73e26c6ffdf8a354d3a45016f806d59bea6134a67cd8caa6a234ab33001ac041" Jan 21 13:14:30 crc kubenswrapper[4765]: I0121 13:14:30.986075 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x677d" Jan 21 13:14:31 crc kubenswrapper[4765]: I0121 13:14:31.050082 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x677d"] Jan 21 13:14:31 crc kubenswrapper[4765]: I0121 13:14:31.050246 4765 scope.go:117] "RemoveContainer" containerID="244440e546156224e989cb09dda090df8132846b38fd068951a196131974efd8" Jan 21 13:14:31 crc kubenswrapper[4765]: I0121 13:14:31.056487 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x677d"] Jan 21 13:14:31 crc kubenswrapper[4765]: I0121 13:14:31.065059 4765 scope.go:117] "RemoveContainer" containerID="2f7fb17bce018750a86c74a69bd8f49e9167ec6d95676b47d40d24296106fa1d" Jan 21 13:14:31 crc kubenswrapper[4765]: I0121 13:14:31.080127 4765 scope.go:117] "RemoveContainer" containerID="040871ade3489838fdb1ee0cdac1d83042e5335522899c61880e97507b21169c" Jan 21 13:14:31 crc kubenswrapper[4765]: I0121 13:14:31.096427 4765 scope.go:117] "RemoveContainer" containerID="7ad2ad0b4c0da082e6c3bd2dda41339c74c8fe491058d93c4b39d468db6fbd22" Jan 21 13:14:31 crc kubenswrapper[4765]: I0121 13:14:31.111133 4765 scope.go:117] "RemoveContainer" containerID="bded32b7d69d540e3c2730c46a9995c5ed31e6cfeb54af0b2b355407a17c781e" Jan 21 13:14:31 crc kubenswrapper[4765]: I0121 13:14:31.124082 4765 scope.go:117] "RemoveContainer" containerID="b3b11e0490a0b33d72f7d408759a9be1eef6ec9cae363e274b961e2bed611741" Jan 21 13:14:31 crc kubenswrapper[4765]: I0121 13:14:31.137926 4765 scope.go:117] "RemoveContainer" containerID="9a23f4076ec779b44ced934e578d677b1a4cfa664dddb2376acfab3ffcdbb083" Jan 21 13:14:31 crc kubenswrapper[4765]: I0121 13:14:31.156175 4765 scope.go:117] "RemoveContainer" containerID="5bde2cc57b09064420c99156e5724f87e3c70d760c661a73f3772d0415be6154" Jan 21 13:14:31 crc kubenswrapper[4765]: I0121 13:14:31.620245 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd80c14d-ebec-4d65-8116-149400d6f8be" path="/var/lib/kubelet/pods/cd80c14d-ebec-4d65-8116-149400d6f8be/volumes" Jan 21 13:14:31 crc kubenswrapper[4765]: I0121 13:14:31.992845 4765 generic.go:334] "Generic (PLEG): container finished" podID="60f3537f-c606-4f26-9671-380063ef129f" containerID="d0b51c992da4871ffdf5fe53160b838d392717d155fa95eb33387a6ca4079fa1" exitCode=0 Jan 21 13:14:31 crc kubenswrapper[4765]: I0121 13:14:31.992949 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" event={"ID":"60f3537f-c606-4f26-9671-380063ef129f","Type":"ContainerDied","Data":"d0b51c992da4871ffdf5fe53160b838d392717d155fa95eb33387a6ca4079fa1"} Jan 21 13:14:36 crc kubenswrapper[4765]: I0121 13:14:36.020336 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" event={"ID":"60f3537f-c606-4f26-9671-380063ef129f","Type":"ContainerStarted","Data":"c5741bf9c932164ec41b8a4b9b4887737d6c60b2866cd0fdbb1883817859b5a6"} Jan 21 13:14:40 crc kubenswrapper[4765]: I0121 13:14:40.045633 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" event={"ID":"60f3537f-c606-4f26-9671-380063ef129f","Type":"ContainerStarted","Data":"3efe453b0583d21128d2aea2bc1dbd05ca82a8b85eb29201819fd2eb493a7a62"} Jan 21 13:14:40 crc kubenswrapper[4765]: I0121 13:14:40.046647 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" 
event={"ID":"60f3537f-c606-4f26-9671-380063ef129f","Type":"ContainerStarted","Data":"3b5aa1b95925f9cd18de1a21b2ab670003df4a162f54fd40b457c4d85030b2c9"} Jan 21 13:14:40 crc kubenswrapper[4765]: I0121 13:14:40.046670 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" event={"ID":"60f3537f-c606-4f26-9671-380063ef129f","Type":"ContainerStarted","Data":"fc46b65223d13c8583120e94e65a7715b3f5893575220cb01bee1c3fecdb48ce"} Jan 21 13:14:40 crc kubenswrapper[4765]: I0121 13:14:40.047813 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gnzb" event={"ID":"861d65e3-bec0-4a97-9ef1-2ff8d0c660fe","Type":"ContainerStarted","Data":"bec9c49c0f355927d539a930c755578cdfec2618f75e5a1395b3925244b314c1"} Jan 21 13:14:42 crc kubenswrapper[4765]: I0121 13:14:42.059825 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-cssjm" event={"ID":"30c79cf6-f62c-498b-8c0b-184d3eec661f","Type":"ContainerStarted","Data":"6489db95bb49a575087b26603a7dee09bfd21105e2fb08bc6bf39bd4c5142a30"} Jan 21 13:14:42 crc kubenswrapper[4765]: I0121 13:14:42.066270 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-gznfw" event={"ID":"34bef5eb-722e-4dd8-b19a-ae2ec67a4c93","Type":"ContainerStarted","Data":"f00d2c16f8c5ee449c4e2099f6abae8a13bc5edc357028b7bb5b0ac33a13d8ff"} Jan 21 13:14:42 crc kubenswrapper[4765]: I0121 13:14:42.066477 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-gznfw" Jan 21 13:14:42 crc kubenswrapper[4765]: I0121 13:14:42.070533 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" event={"ID":"60f3537f-c606-4f26-9671-380063ef129f","Type":"ContainerStarted","Data":"bb922d3ac0d2cf895c0dd12b0b6a6bf3d95c2942f8bf1b4036c80389fe618555"} Jan 21 13:14:42 crc kubenswrapper[4765]: I0121 13:14:42.076064 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-cssjm" podStartSLOduration=2.241067143 podStartE2EDuration="22.076040903s" podCreationTimestamp="2026-01-21 13:14:20 +0000 UTC" firstStartedPulling="2026-01-21 13:14:21.133446958 +0000 UTC m=+722.151172780" lastFinishedPulling="2026-01-21 13:14:40.968420718 +0000 UTC m=+741.986146540" observedRunningTime="2026-01-21 13:14:42.075984622 +0000 UTC m=+743.093710444" watchObservedRunningTime="2026-01-21 13:14:42.076040903 +0000 UTC m=+743.093766725" Jan 21 13:14:42 crc kubenswrapper[4765]: I0121 13:14:42.078850 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gnzb" podStartSLOduration=3.9622959939999998 podStartE2EDuration="22.077004101s" podCreationTimestamp="2026-01-21 13:14:20 +0000 UTC" firstStartedPulling="2026-01-21 13:14:21.068339636 +0000 UTC m=+722.086065458" lastFinishedPulling="2026-01-21 13:14:39.183047743 +0000 UTC m=+740.200773565" observedRunningTime="2026-01-21 13:14:40.068174933 +0000 UTC m=+741.085900775" watchObservedRunningTime="2026-01-21 13:14:42.077004101 +0000 UTC m=+743.094729923" Jan 21 13:14:42 crc kubenswrapper[4765]: I0121 13:14:42.101748 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-gznfw" podStartSLOduration=2.042289995 podStartE2EDuration="22.101720797s" podCreationTimestamp="2026-01-21 13:14:20 +0000 UTC" 
firstStartedPulling="2026-01-21 13:14:21.175694868 +0000 UTC m=+722.193420690" lastFinishedPulling="2026-01-21 13:14:41.23512567 +0000 UTC m=+742.252851492" observedRunningTime="2026-01-21 13:14:42.098602296 +0000 UTC m=+743.116328118" watchObservedRunningTime="2026-01-21 13:14:42.101720797 +0000 UTC m=+743.119446619" Jan 21 13:14:43 crc kubenswrapper[4765]: I0121 13:14:43.096204 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" event={"ID":"60f3537f-c606-4f26-9671-380063ef129f","Type":"ContainerStarted","Data":"04a737e1b680453757c13c5fa3355aecd72803c9d5daf8ac09972a0753225362"} Jan 21 13:14:44 crc kubenswrapper[4765]: I0121 13:14:44.446564 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:14:44 crc kubenswrapper[4765]: I0121 13:14:44.447447 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:14:44 crc kubenswrapper[4765]: I0121 13:14:44.614011 4765 scope.go:117] "RemoveContainer" containerID="1ae915ebd49fe934c46ddf83c4203b9e4892daa00e041b4eb261c093882f696f" Jan 21 13:14:45 crc kubenswrapper[4765]: I0121 13:14:45.112763 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" event={"ID":"60f3537f-c606-4f26-9671-380063ef129f","Type":"ContainerStarted","Data":"9be82d3696142b40343288536e5dbb5c4132e1fcae70df9b6cafd69303e09fb3"} Jan 21 13:14:45 crc kubenswrapper[4765]: I0121 13:14:45.114842 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bplfq_d9b9a5be-6b15-46d2-8715-506efdae8ae7/kube-multus/2.log" Jan 21 13:14:45 crc kubenswrapper[4765]: I0121 13:14:45.114886 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bplfq" event={"ID":"d9b9a5be-6b15-46d2-8715-506efdae8ae7","Type":"ContainerStarted","Data":"cc94e4ac3ff9f6661301cd1aa46023112c5cad7fd6024bdd47a889fd56ee24dc"} Jan 21 13:14:47 crc kubenswrapper[4765]: I0121 13:14:47.130189 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" event={"ID":"60f3537f-c606-4f26-9671-380063ef129f","Type":"ContainerStarted","Data":"4bb7206e3aa3b8e71f26078c31d77446b02c782d6b42835bab41cb48dbdbebfb"} Jan 21 13:14:47 crc kubenswrapper[4765]: I0121 13:14:47.131290 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:47 crc kubenswrapper[4765]: I0121 13:14:47.131306 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:47 crc kubenswrapper[4765]: I0121 13:14:47.131315 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:47 crc kubenswrapper[4765]: I0121 13:14:47.159024 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:47 crc kubenswrapper[4765]: I0121 13:14:47.159851 4765 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:14:47 crc kubenswrapper[4765]: I0121 13:14:47.166945 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" podStartSLOduration=17.166925243 podStartE2EDuration="17.166925243s" podCreationTimestamp="2026-01-21 13:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:14:47.16170906 +0000 UTC m=+748.179434882" watchObservedRunningTime="2026-01-21 13:14:47.166925243 +0000 UTC m=+748.184651065" Jan 21 13:14:50 crc kubenswrapper[4765]: I0121 13:14:50.823509 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-gznfw" Jan 21 13:15:00 crc kubenswrapper[4765]: I0121 13:15:00.168721 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9"] Jan 21 13:15:00 crc kubenswrapper[4765]: I0121 13:15:00.170392 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9" Jan 21 13:15:00 crc kubenswrapper[4765]: I0121 13:15:00.173022 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 13:15:00 crc kubenswrapper[4765]: I0121 13:15:00.173867 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 13:15:00 crc kubenswrapper[4765]: I0121 13:15:00.179296 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9"] Jan 21 13:15:00 crc kubenswrapper[4765]: I0121 13:15:00.246129 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e54c9740-b071-4064-873a-acf56bc89533-config-volume\") pod \"collect-profiles-29483355-nk7x9\" (UID: \"e54c9740-b071-4064-873a-acf56bc89533\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9" Jan 21 13:15:00 crc kubenswrapper[4765]: I0121 13:15:00.246202 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e54c9740-b071-4064-873a-acf56bc89533-secret-volume\") pod \"collect-profiles-29483355-nk7x9\" (UID: \"e54c9740-b071-4064-873a-acf56bc89533\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9" Jan 21 13:15:00 crc kubenswrapper[4765]: I0121 13:15:00.246283 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6dxp\" (UniqueName: \"kubernetes.io/projected/e54c9740-b071-4064-873a-acf56bc89533-kube-api-access-t6dxp\") pod \"collect-profiles-29483355-nk7x9\" (UID: \"e54c9740-b071-4064-873a-acf56bc89533\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9" Jan 21 13:15:00 crc kubenswrapper[4765]: I0121 13:15:00.348601 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e54c9740-b071-4064-873a-acf56bc89533-config-volume\") pod \"collect-profiles-29483355-nk7x9\" (UID: \"e54c9740-b071-4064-873a-acf56bc89533\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9" Jan 21 13:15:00 crc kubenswrapper[4765]: I0121 13:15:00.348691 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e54c9740-b071-4064-873a-acf56bc89533-secret-volume\") pod \"collect-profiles-29483355-nk7x9\" (UID: \"e54c9740-b071-4064-873a-acf56bc89533\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9" Jan 21 13:15:00 crc kubenswrapper[4765]: I0121 13:15:00.348755 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6dxp\" (UniqueName: \"kubernetes.io/projected/e54c9740-b071-4064-873a-acf56bc89533-kube-api-access-t6dxp\") pod \"collect-profiles-29483355-nk7x9\" (UID: \"e54c9740-b071-4064-873a-acf56bc89533\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9" Jan 21 13:15:00 crc kubenswrapper[4765]: I0121 13:15:00.350725 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e54c9740-b071-4064-873a-acf56bc89533-config-volume\") pod \"collect-profiles-29483355-nk7x9\" (UID: \"e54c9740-b071-4064-873a-acf56bc89533\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9" Jan 21 13:15:00 crc kubenswrapper[4765]: I0121 13:15:00.357619 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e54c9740-b071-4064-873a-acf56bc89533-secret-volume\") pod \"collect-profiles-29483355-nk7x9\" (UID: \"e54c9740-b071-4064-873a-acf56bc89533\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9" Jan 21 13:15:00 crc kubenswrapper[4765]: I0121 13:15:00.370238 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6dxp\" (UniqueName: \"kubernetes.io/projected/e54c9740-b071-4064-873a-acf56bc89533-kube-api-access-t6dxp\") pod \"collect-profiles-29483355-nk7x9\" (UID: \"e54c9740-b071-4064-873a-acf56bc89533\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9" Jan 21 13:15:00 crc kubenswrapper[4765]: I0121 13:15:00.489770 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9" Jan 21 13:15:00 crc kubenswrapper[4765]: I0121 13:15:00.541473 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sjk68" Jan 21 13:15:00 crc kubenswrapper[4765]: I0121 13:15:00.757575 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9"] Jan 21 13:15:01 crc kubenswrapper[4765]: I0121 13:15:01.215842 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9" event={"ID":"e54c9740-b071-4064-873a-acf56bc89533","Type":"ContainerStarted","Data":"5e8896dac1c6e57ce3c342e6559e81f70b2d1d3317d34e381688ab96c741c549"} Jan 21 13:15:02 crc kubenswrapper[4765]: I0121 13:15:02.222325 4765 generic.go:334] "Generic (PLEG): container finished" podID="e54c9740-b071-4064-873a-acf56bc89533" containerID="2b78061ca4f0f9dd357a1044a304c871c5d75b92a1cdb2649d1a6dd6d6addf60" exitCode=0 Jan 21 13:15:02 crc kubenswrapper[4765]: I0121 13:15:02.222380 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9" event={"ID":"e54c9740-b071-4064-873a-acf56bc89533","Type":"ContainerDied","Data":"2b78061ca4f0f9dd357a1044a304c871c5d75b92a1cdb2649d1a6dd6d6addf60"} Jan 21 13:15:03 crc kubenswrapper[4765]: I0121 13:15:03.456645 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9" Jan 21 13:15:03 crc kubenswrapper[4765]: I0121 13:15:03.493299 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6dxp\" (UniqueName: \"kubernetes.io/projected/e54c9740-b071-4064-873a-acf56bc89533-kube-api-access-t6dxp\") pod \"e54c9740-b071-4064-873a-acf56bc89533\" (UID: \"e54c9740-b071-4064-873a-acf56bc89533\") " Jan 21 13:15:03 crc kubenswrapper[4765]: I0121 13:15:03.493397 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e54c9740-b071-4064-873a-acf56bc89533-config-volume\") pod \"e54c9740-b071-4064-873a-acf56bc89533\" (UID: \"e54c9740-b071-4064-873a-acf56bc89533\") " Jan 21 13:15:03 crc kubenswrapper[4765]: I0121 13:15:03.493511 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e54c9740-b071-4064-873a-acf56bc89533-secret-volume\") pod \"e54c9740-b071-4064-873a-acf56bc89533\" (UID: \"e54c9740-b071-4064-873a-acf56bc89533\") " Jan 21 13:15:03 crc kubenswrapper[4765]: I0121 13:15:03.494530 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54c9740-b071-4064-873a-acf56bc89533-config-volume" (OuterVolumeSpecName: "config-volume") pod "e54c9740-b071-4064-873a-acf56bc89533" (UID: "e54c9740-b071-4064-873a-acf56bc89533"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:15:03 crc kubenswrapper[4765]: I0121 13:15:03.500637 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e54c9740-b071-4064-873a-acf56bc89533-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e54c9740-b071-4064-873a-acf56bc89533" (UID: "e54c9740-b071-4064-873a-acf56bc89533"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:15:03 crc kubenswrapper[4765]: I0121 13:15:03.502052 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54c9740-b071-4064-873a-acf56bc89533-kube-api-access-t6dxp" (OuterVolumeSpecName: "kube-api-access-t6dxp") pod "e54c9740-b071-4064-873a-acf56bc89533" (UID: "e54c9740-b071-4064-873a-acf56bc89533"). InnerVolumeSpecName "kube-api-access-t6dxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:15:03 crc kubenswrapper[4765]: I0121 13:15:03.595608 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6dxp\" (UniqueName: \"kubernetes.io/projected/e54c9740-b071-4064-873a-acf56bc89533-kube-api-access-t6dxp\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:03 crc kubenswrapper[4765]: I0121 13:15:03.595663 4765 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e54c9740-b071-4064-873a-acf56bc89533-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:03 crc kubenswrapper[4765]: I0121 13:15:03.595673 4765 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e54c9740-b071-4064-873a-acf56bc89533-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:04 crc kubenswrapper[4765]: I0121 13:15:04.238918 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9" event={"ID":"e54c9740-b071-4064-873a-acf56bc89533","Type":"ContainerDied","Data":"5e8896dac1c6e57ce3c342e6559e81f70b2d1d3317d34e381688ab96c741c549"} Jan 21 13:15:04 crc kubenswrapper[4765]: I0121 13:15:04.239282 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e8896dac1c6e57ce3c342e6559e81f70b2d1d3317d34e381688ab96c741c549" Jan 21 13:15:04 crc kubenswrapper[4765]: I0121 13:15:04.239421 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9" Jan 21 13:15:14 crc kubenswrapper[4765]: I0121 13:15:14.446331 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:15:14 crc kubenswrapper[4765]: I0121 13:15:14.447382 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:15:15 crc kubenswrapper[4765]: I0121 13:15:15.513866 4765 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 13:15:36 crc kubenswrapper[4765]: I0121 13:15:36.366252 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22"] Jan 21 13:15:36 crc kubenswrapper[4765]: E0121 13:15:36.367194 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54c9740-b071-4064-873a-acf56bc89533" containerName="collect-profiles" Jan 21 13:15:36 crc kubenswrapper[4765]: I0121 13:15:36.367235 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54c9740-b071-4064-873a-acf56bc89533" containerName="collect-profiles" Jan 21 13:15:36 crc kubenswrapper[4765]: I0121 13:15:36.367368 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="e54c9740-b071-4064-873a-acf56bc89533" containerName="collect-profiles" Jan 21 13:15:36 crc kubenswrapper[4765]: I0121 13:15:36.368181 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22" Jan 21 13:15:36 crc kubenswrapper[4765]: I0121 13:15:36.370657 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 13:15:36 crc kubenswrapper[4765]: I0121 13:15:36.392923 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22"] Jan 21 13:15:36 crc kubenswrapper[4765]: I0121 13:15:36.503431 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68e5ceb6-2341-4976-8588-ecdd97e94b29-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22\" (UID: \"68e5ceb6-2341-4976-8588-ecdd97e94b29\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22" Jan 21 13:15:36 crc kubenswrapper[4765]: I0121 13:15:36.503498 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68e5ceb6-2341-4976-8588-ecdd97e94b29-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22\" (UID: \"68e5ceb6-2341-4976-8588-ecdd97e94b29\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22" Jan 21 13:15:36 crc kubenswrapper[4765]: I0121 13:15:36.503553 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqz8v\" (UniqueName: \"kubernetes.io/projected/68e5ceb6-2341-4976-8588-ecdd97e94b29-kube-api-access-zqz8v\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22\" (UID: \"68e5ceb6-2341-4976-8588-ecdd97e94b29\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22" Jan 21 13:15:36 crc kubenswrapper[4765]: I0121 13:15:36.605369 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqz8v\" (UniqueName: \"kubernetes.io/projected/68e5ceb6-2341-4976-8588-ecdd97e94b29-kube-api-access-zqz8v\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22\" (UID: \"68e5ceb6-2341-4976-8588-ecdd97e94b29\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22" Jan 21 13:15:36 crc kubenswrapper[4765]: I0121 13:15:36.605476 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68e5ceb6-2341-4976-8588-ecdd97e94b29-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22\" (UID: \"68e5ceb6-2341-4976-8588-ecdd97e94b29\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22" Jan 21 13:15:36 crc kubenswrapper[4765]: I0121 13:15:36.605538 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68e5ceb6-2341-4976-8588-ecdd97e94b29-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22\" (UID: \"68e5ceb6-2341-4976-8588-ecdd97e94b29\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22" Jan 21 13:15:36 crc kubenswrapper[4765]: I0121 13:15:36.606628 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/68e5ceb6-2341-4976-8588-ecdd97e94b29-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22\" (UID: \"68e5ceb6-2341-4976-8588-ecdd97e94b29\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22" Jan 21 13:15:36 crc kubenswrapper[4765]: I0121 13:15:36.606727 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68e5ceb6-2341-4976-8588-ecdd97e94b29-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22\" (UID: \"68e5ceb6-2341-4976-8588-ecdd97e94b29\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22" Jan 21 13:15:36 crc kubenswrapper[4765]: I0121 13:15:36.633688 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqz8v\" (UniqueName: \"kubernetes.io/projected/68e5ceb6-2341-4976-8588-ecdd97e94b29-kube-api-access-zqz8v\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22\" (UID: \"68e5ceb6-2341-4976-8588-ecdd97e94b29\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22" Jan 21 13:15:36 crc kubenswrapper[4765]: I0121 13:15:36.685625 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22" Jan 21 13:15:37 crc kubenswrapper[4765]: I0121 13:15:37.117057 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22"] Jan 21 13:15:37 crc kubenswrapper[4765]: I0121 13:15:37.443323 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22" event={"ID":"68e5ceb6-2341-4976-8588-ecdd97e94b29","Type":"ContainerStarted","Data":"df7758a3bcf388d2f66eddd9842bd48838eb90044b82e8f34f0a027787e421da"} Jan 21 13:15:38 crc kubenswrapper[4765]: I0121 13:15:38.451277 4765 generic.go:334] "Generic (PLEG): container finished" podID="68e5ceb6-2341-4976-8588-ecdd97e94b29" containerID="484934bd0038342b3da96bff35d45225d6935f50945677f2e96bc3368b41b4d9" exitCode=0 Jan 21 13:15:38 crc kubenswrapper[4765]: I0121 13:15:38.451341 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22" event={"ID":"68e5ceb6-2341-4976-8588-ecdd97e94b29","Type":"ContainerDied","Data":"484934bd0038342b3da96bff35d45225d6935f50945677f2e96bc3368b41b4d9"} Jan 21 13:15:38 crc kubenswrapper[4765]: I0121 13:15:38.640024 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-thmzs"] Jan 21 13:15:38 crc kubenswrapper[4765]: I0121 13:15:38.641881 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-thmzs" Jan 21 13:15:38 crc kubenswrapper[4765]: I0121 13:15:38.664378 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-thmzs"] Jan 21 13:15:38 crc kubenswrapper[4765]: I0121 13:15:38.743562 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdfkt\" (UniqueName: \"kubernetes.io/projected/7327f338-11e9-4d75-bcd4-aa62c4e1c830-kube-api-access-jdfkt\") pod \"redhat-operators-thmzs\" (UID: \"7327f338-11e9-4d75-bcd4-aa62c4e1c830\") " pod="openshift-marketplace/redhat-operators-thmzs" Jan 21 13:15:38 crc kubenswrapper[4765]: I0121 13:15:38.743665 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7327f338-11e9-4d75-bcd4-aa62c4e1c830-utilities\") pod \"redhat-operators-thmzs\" (UID: \"7327f338-11e9-4d75-bcd4-aa62c4e1c830\") " pod="openshift-marketplace/redhat-operators-thmzs" Jan 21 13:15:38 crc kubenswrapper[4765]: I0121 13:15:38.743711 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7327f338-11e9-4d75-bcd4-aa62c4e1c830-catalog-content\") pod \"redhat-operators-thmzs\" (UID: \"7327f338-11e9-4d75-bcd4-aa62c4e1c830\") " pod="openshift-marketplace/redhat-operators-thmzs" Jan 21 13:15:38 crc kubenswrapper[4765]: I0121 13:15:38.845429 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdfkt\" (UniqueName: \"kubernetes.io/projected/7327f338-11e9-4d75-bcd4-aa62c4e1c830-kube-api-access-jdfkt\") pod \"redhat-operators-thmzs\" (UID: \"7327f338-11e9-4d75-bcd4-aa62c4e1c830\") " pod="openshift-marketplace/redhat-operators-thmzs" Jan 21 13:15:38 crc kubenswrapper[4765]: I0121 13:15:38.845535 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7327f338-11e9-4d75-bcd4-aa62c4e1c830-utilities\") pod \"redhat-operators-thmzs\" (UID: \"7327f338-11e9-4d75-bcd4-aa62c4e1c830\") " pod="openshift-marketplace/redhat-operators-thmzs" Jan 21 13:15:38 crc kubenswrapper[4765]: I0121 13:15:38.845578 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7327f338-11e9-4d75-bcd4-aa62c4e1c830-catalog-content\") pod \"redhat-operators-thmzs\" (UID: \"7327f338-11e9-4d75-bcd4-aa62c4e1c830\") " pod="openshift-marketplace/redhat-operators-thmzs" Jan 21 13:15:38 crc kubenswrapper[4765]: I0121 13:15:38.846262 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7327f338-11e9-4d75-bcd4-aa62c4e1c830-utilities\") pod \"redhat-operators-thmzs\" (UID: \"7327f338-11e9-4d75-bcd4-aa62c4e1c830\") " pod="openshift-marketplace/redhat-operators-thmzs" Jan 21 13:15:38 crc kubenswrapper[4765]: I0121 13:15:38.846275 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7327f338-11e9-4d75-bcd4-aa62c4e1c830-catalog-content\") pod \"redhat-operators-thmzs\" (UID: \"7327f338-11e9-4d75-bcd4-aa62c4e1c830\") " pod="openshift-marketplace/redhat-operators-thmzs" Jan 21 13:15:38 crc kubenswrapper[4765]: I0121 13:15:38.878460 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jdfkt\" (UniqueName: \"kubernetes.io/projected/7327f338-11e9-4d75-bcd4-aa62c4e1c830-kube-api-access-jdfkt\") pod \"redhat-operators-thmzs\" (UID: \"7327f338-11e9-4d75-bcd4-aa62c4e1c830\") " pod="openshift-marketplace/redhat-operators-thmzs" Jan 21 13:15:39 crc kubenswrapper[4765]: I0121 13:15:39.011603 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thmzs" Jan 21 13:15:39 crc kubenswrapper[4765]: I0121 13:15:39.253641 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-thmzs"] Jan 21 13:15:39 crc kubenswrapper[4765]: I0121 13:15:39.458677 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thmzs" event={"ID":"7327f338-11e9-4d75-bcd4-aa62c4e1c830","Type":"ContainerStarted","Data":"6a64a90bcf24479b22accbc439b0ddd4d01747f03a2d6d0769f8d98a5b2641cf"} Jan 21 13:15:40 crc kubenswrapper[4765]: I0121 13:15:40.467888 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thmzs" event={"ID":"7327f338-11e9-4d75-bcd4-aa62c4e1c830","Type":"ContainerStarted","Data":"c49b3a1afc6d3e9fee1d1b65467b87839fa39f6f51440ba272cedde76b58c271"} Jan 21 13:15:41 crc kubenswrapper[4765]: I0121 13:15:41.476842 4765 generic.go:334] "Generic (PLEG): container finished" podID="7327f338-11e9-4d75-bcd4-aa62c4e1c830" containerID="c49b3a1afc6d3e9fee1d1b65467b87839fa39f6f51440ba272cedde76b58c271" exitCode=0 Jan 21 13:15:41 crc kubenswrapper[4765]: I0121 13:15:41.476895 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thmzs" event={"ID":"7327f338-11e9-4d75-bcd4-aa62c4e1c830","Type":"ContainerDied","Data":"c49b3a1afc6d3e9fee1d1b65467b87839fa39f6f51440ba272cedde76b58c271"} Jan 21 13:15:42 crc kubenswrapper[4765]: I0121 13:15:42.507464 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22" event={"ID":"68e5ceb6-2341-4976-8588-ecdd97e94b29","Type":"ContainerStarted","Data":"9acefcacb367c6998162a06edcca7463e83c1485d96c54c8d4beca365e70767e"} Jan 21 13:15:43 crc kubenswrapper[4765]: I0121 13:15:43.515407 4765 generic.go:334] "Generic (PLEG): container finished" podID="68e5ceb6-2341-4976-8588-ecdd97e94b29" containerID="9acefcacb367c6998162a06edcca7463e83c1485d96c54c8d4beca365e70767e" exitCode=0 Jan 21 13:15:43 crc kubenswrapper[4765]: I0121 13:15:43.515483 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22" event={"ID":"68e5ceb6-2341-4976-8588-ecdd97e94b29","Type":"ContainerDied","Data":"9acefcacb367c6998162a06edcca7463e83c1485d96c54c8d4beca365e70767e"} Jan 21 13:15:44 crc kubenswrapper[4765]: I0121 13:15:44.446522 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:15:44 crc kubenswrapper[4765]: I0121 13:15:44.447023 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 21 13:15:44 crc kubenswrapper[4765]: I0121 13:15:44.447276 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:15:44 crc kubenswrapper[4765]: I0121 13:15:44.448385 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f52e9baa6469e50f020fdb819604c74920e3021231bce0736ea82e11d2f65248"} pod="openshift-machine-config-operator/machine-config-daemon-v72nq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:15:44 crc kubenswrapper[4765]: I0121 13:15:44.448513 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" containerID="cri-o://f52e9baa6469e50f020fdb819604c74920e3021231bce0736ea82e11d2f65248" gracePeriod=600 Jan 21 13:15:45 crc kubenswrapper[4765]: I0121 13:15:45.531829 4765 generic.go:334] "Generic (PLEG): container finished" podID="e149390c-e4da-4dfd-bed2-b14de058f921" containerID="f52e9baa6469e50f020fdb819604c74920e3021231bce0736ea82e11d2f65248" exitCode=0 Jan 21 13:15:45 crc kubenswrapper[4765]: I0121 13:15:45.531955 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerDied","Data":"f52e9baa6469e50f020fdb819604c74920e3021231bce0736ea82e11d2f65248"} Jan 21 13:15:45 crc kubenswrapper[4765]: I0121 13:15:45.533409 4765 scope.go:117] "RemoveContainer" containerID="2a82e379790a817fc46a03244d0ba93ae43907a95b2b85581d0d985030ba55af" Jan 21 13:15:45 crc kubenswrapper[4765]: I0121 13:15:45.536843 4765 generic.go:334] "Generic (PLEG): container finished" podID="68e5ceb6-2341-4976-8588-ecdd97e94b29" containerID="d611d2090229b557351ffd02103076039f3507d777e55139105481b7d03cb120" exitCode=0 Jan 21 13:15:45 crc kubenswrapper[4765]: I0121 13:15:45.536933 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22" event={"ID":"68e5ceb6-2341-4976-8588-ecdd97e94b29","Type":"ContainerDied","Data":"d611d2090229b557351ffd02103076039f3507d777e55139105481b7d03cb120"} Jan 21 13:15:46 crc kubenswrapper[4765]: I0121 13:15:46.545134 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"3163e8db45db8b9601f45b03cbef2661d131b6e749b48c66d1778284a24a76c2"} Jan 21 13:15:46 crc kubenswrapper[4765]: I0121 13:15:46.547978 4765 generic.go:334] "Generic (PLEG): container finished" podID="7327f338-11e9-4d75-bcd4-aa62c4e1c830" containerID="99959ab4cb014ca6da299fb9d55ded54a8ed89ed6b884ffd2a19ad99bf73288a" exitCode=0 Jan 21 13:15:46 crc kubenswrapper[4765]: I0121 13:15:46.548027 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thmzs" event={"ID":"7327f338-11e9-4d75-bcd4-aa62c4e1c830","Type":"ContainerDied","Data":"99959ab4cb014ca6da299fb9d55ded54a8ed89ed6b884ffd2a19ad99bf73288a"} Jan 21 13:15:46 crc kubenswrapper[4765]: I0121 13:15:46.784392 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22" Jan 21 13:15:46 crc kubenswrapper[4765]: I0121 13:15:46.883041 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68e5ceb6-2341-4976-8588-ecdd97e94b29-bundle\") pod \"68e5ceb6-2341-4976-8588-ecdd97e94b29\" (UID: \"68e5ceb6-2341-4976-8588-ecdd97e94b29\") " Jan 21 13:15:46 crc kubenswrapper[4765]: I0121 13:15:46.883127 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqz8v\" (UniqueName: \"kubernetes.io/projected/68e5ceb6-2341-4976-8588-ecdd97e94b29-kube-api-access-zqz8v\") pod \"68e5ceb6-2341-4976-8588-ecdd97e94b29\" (UID: \"68e5ceb6-2341-4976-8588-ecdd97e94b29\") " Jan 21 13:15:46 crc kubenswrapper[4765]: I0121 13:15:46.883204 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68e5ceb6-2341-4976-8588-ecdd97e94b29-util\") pod \"68e5ceb6-2341-4976-8588-ecdd97e94b29\" (UID: \"68e5ceb6-2341-4976-8588-ecdd97e94b29\") " Jan 21 13:15:46 crc kubenswrapper[4765]: I0121 13:15:46.883663 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68e5ceb6-2341-4976-8588-ecdd97e94b29-bundle" (OuterVolumeSpecName: "bundle") pod "68e5ceb6-2341-4976-8588-ecdd97e94b29" (UID: "68e5ceb6-2341-4976-8588-ecdd97e94b29"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:15:46 crc kubenswrapper[4765]: I0121 13:15:46.888726 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68e5ceb6-2341-4976-8588-ecdd97e94b29-kube-api-access-zqz8v" (OuterVolumeSpecName: "kube-api-access-zqz8v") pod "68e5ceb6-2341-4976-8588-ecdd97e94b29" (UID: "68e5ceb6-2341-4976-8588-ecdd97e94b29"). InnerVolumeSpecName "kube-api-access-zqz8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:15:46 crc kubenswrapper[4765]: I0121 13:15:46.892968 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68e5ceb6-2341-4976-8588-ecdd97e94b29-util" (OuterVolumeSpecName: "util") pod "68e5ceb6-2341-4976-8588-ecdd97e94b29" (UID: "68e5ceb6-2341-4976-8588-ecdd97e94b29"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:15:46 crc kubenswrapper[4765]: I0121 13:15:46.984960 4765 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68e5ceb6-2341-4976-8588-ecdd97e94b29-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:46 crc kubenswrapper[4765]: I0121 13:15:46.985017 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqz8v\" (UniqueName: \"kubernetes.io/projected/68e5ceb6-2341-4976-8588-ecdd97e94b29-kube-api-access-zqz8v\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:46 crc kubenswrapper[4765]: I0121 13:15:46.985030 4765 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68e5ceb6-2341-4976-8588-ecdd97e94b29-util\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:47 crc kubenswrapper[4765]: I0121 13:15:47.555647 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thmzs" event={"ID":"7327f338-11e9-4d75-bcd4-aa62c4e1c830","Type":"ContainerStarted","Data":"26b806f339a978519b931c8e28e10cd4a3da66ba61fda9b120896241ecfb65fa"} Jan 21 13:15:47 crc kubenswrapper[4765]: I0121 13:15:47.559105 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22" event={"ID":"68e5ceb6-2341-4976-8588-ecdd97e94b29","Type":"ContainerDied","Data":"df7758a3bcf388d2f66eddd9842bd48838eb90044b82e8f34f0a027787e421da"} Jan 21 13:15:47 crc kubenswrapper[4765]: I0121 13:15:47.559164 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df7758a3bcf388d2f66eddd9842bd48838eb90044b82e8f34f0a027787e421da" Jan 21 13:15:47 crc kubenswrapper[4765]: I0121 13:15:47.559134 4765 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 13:15:47 crc kubenswrapper[4765]: I0121 13:15:47.579956 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-thmzs" podStartSLOduration=4.326657296 podStartE2EDuration="9.579928292s" podCreationTimestamp="2026-01-21 13:15:38 +0000 UTC" firstStartedPulling="2026-01-21 13:15:41.862104166 +0000 UTC m=+802.879829988" lastFinishedPulling="2026-01-21 13:15:47.115375162 +0000 UTC m=+808.133100984" observedRunningTime="2026-01-21 13:15:47.576849051 +0000 UTC m=+808.594574893" watchObservedRunningTime="2026-01-21 13:15:47.579928292 +0000 UTC m=+808.597654114"
Jan 21 13:15:49 crc kubenswrapper[4765]: I0121 13:15:49.012578 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-thmzs"
Jan 21 13:15:49 crc kubenswrapper[4765]: I0121 13:15:49.014468 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-thmzs"
Jan 21 13:15:50 crc kubenswrapper[4765]: I0121 13:15:50.048858 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-thmzs" podUID="7327f338-11e9-4d75-bcd4-aa62c4e1c830" containerName="registry-server" probeResult="failure" output=<
Jan 21 13:15:50 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s
Jan 21 13:15:50 crc kubenswrapper[4765]: >
Jan 21 13:15:52 crc kubenswrapper[4765]: I0121 13:15:52.810227 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-fhpqb"]
Jan 21 13:15:52 crc kubenswrapper[4765]: E0121 13:15:52.810964 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e5ceb6-2341-4976-8588-ecdd97e94b29" containerName="pull"
Jan 21 13:15:52 crc kubenswrapper[4765]: I0121 13:15:52.810986 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e5ceb6-2341-4976-8588-ecdd97e94b29" containerName="pull"
Jan 21 13:15:52 crc kubenswrapper[4765]: E0121 13:15:52.811015 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e5ceb6-2341-4976-8588-ecdd97e94b29" containerName="util"
Jan 21 13:15:52 crc kubenswrapper[4765]: I0121 13:15:52.811023 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e5ceb6-2341-4976-8588-ecdd97e94b29" containerName="util"
Jan 21 13:15:52 crc kubenswrapper[4765]: E0121 13:15:52.811038 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e5ceb6-2341-4976-8588-ecdd97e94b29" containerName="extract"
Jan 21 13:15:52 crc kubenswrapper[4765]: I0121 13:15:52.811047 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e5ceb6-2341-4976-8588-ecdd97e94b29" containerName="extract"
Jan 21 13:15:52 crc kubenswrapper[4765]: I0121 13:15:52.811168 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e5ceb6-2341-4976-8588-ecdd97e94b29" containerName="extract"
Jan 21 13:15:52 crc kubenswrapper[4765]: I0121 13:15:52.811756 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-fhpqb"
Jan 21 13:15:52 crc kubenswrapper[4765]: I0121 13:15:52.814511 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-ds8vh"
Jan 21 13:15:52 crc kubenswrapper[4765]: I0121 13:15:52.814838 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 21 13:15:52 crc kubenswrapper[4765]: I0121 13:15:52.816980 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 21 13:15:52 crc kubenswrapper[4765]: I0121 13:15:52.829575 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-fhpqb"]
Jan 21 13:15:52 crc kubenswrapper[4765]: I0121 13:15:52.979795 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zchvf\" (UniqueName: \"kubernetes.io/projected/26e746e8-47b5-4944-957d-5d43a89b207b-kube-api-access-zchvf\") pod \"nmstate-operator-646758c888-fhpqb\" (UID: \"26e746e8-47b5-4944-957d-5d43a89b207b\") " pod="openshift-nmstate/nmstate-operator-646758c888-fhpqb"
Jan 21 13:15:53 crc kubenswrapper[4765]: I0121 13:15:53.081562 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zchvf\" (UniqueName: \"kubernetes.io/projected/26e746e8-47b5-4944-957d-5d43a89b207b-kube-api-access-zchvf\") pod \"nmstate-operator-646758c888-fhpqb\" (UID: \"26e746e8-47b5-4944-957d-5d43a89b207b\") " pod="openshift-nmstate/nmstate-operator-646758c888-fhpqb"
Jan 21 13:15:53 crc kubenswrapper[4765]: I0121 13:15:53.104137 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zchvf\" (UniqueName: \"kubernetes.io/projected/26e746e8-47b5-4944-957d-5d43a89b207b-kube-api-access-zchvf\") pod \"nmstate-operator-646758c888-fhpqb\" (UID: \"26e746e8-47b5-4944-957d-5d43a89b207b\") " pod="openshift-nmstate/nmstate-operator-646758c888-fhpqb"
Jan 21 13:15:53 crc kubenswrapper[4765]: I0121 13:15:53.130895 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-fhpqb"
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-fhpqb" Jan 21 13:15:53 crc kubenswrapper[4765]: I0121 13:15:53.548848 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-fhpqb"] Jan 21 13:15:53 crc kubenswrapper[4765]: I0121 13:15:53.596105 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-fhpqb" event={"ID":"26e746e8-47b5-4944-957d-5d43a89b207b","Type":"ContainerStarted","Data":"f33ad3579ed0b35f76d2453398bd1db7b0108ddafeb7993045a21314cae337db"} Jan 21 13:15:56 crc kubenswrapper[4765]: I0121 13:15:56.623048 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-fhpqb" event={"ID":"26e746e8-47b5-4944-957d-5d43a89b207b","Type":"ContainerStarted","Data":"d40d208d933467dc8e95642dde14d54af8cac4b44a3fec564487c7ecc4143abf"} Jan 21 13:15:56 crc kubenswrapper[4765]: I0121 13:15:56.665251 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-fhpqb" podStartSLOduration=2.402392044 podStartE2EDuration="4.66520471s" podCreationTimestamp="2026-01-21 13:15:52 +0000 UTC" firstStartedPulling="2026-01-21 13:15:53.570102632 +0000 UTC m=+814.587828454" lastFinishedPulling="2026-01-21 13:15:55.832915308 +0000 UTC m=+816.850641120" observedRunningTime="2026-01-21 13:15:56.656895753 +0000 UTC m=+817.674621585" watchObservedRunningTime="2026-01-21 13:15:56.66520471 +0000 UTC m=+817.682930552" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.605060 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b2d62"] Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.606447 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b2d62" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.608237 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-cf5c2" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.652983 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b2d62"] Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.663640 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-lmj8n"] Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.664650 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lmj8n" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.672561 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.673549 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-lbjjz"] Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.674452 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-lbjjz" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.685881 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-lmj8n"] Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.754140 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5cck\" (UniqueName: \"kubernetes.io/projected/7d962382-89ac-40cc-92b2-0bb0a8cecc4d-kube-api-access-w5cck\") pod \"nmstate-metrics-54757c584b-b2d62\" (UID: \"7d962382-89ac-40cc-92b2-0bb0a8cecc4d\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b2d62" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.754197 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a847c8c4-dd77-4cd8-9e06-5adb119c43fc-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-lmj8n\" (UID: \"a847c8c4-dd77-4cd8-9e06-5adb119c43fc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lmj8n" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.754241 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvshn\" (UniqueName: \"kubernetes.io/projected/a847c8c4-dd77-4cd8-9e06-5adb119c43fc-kube-api-access-kvshn\") pod \"nmstate-webhook-8474b5b9d8-lmj8n\" (UID: \"a847c8c4-dd77-4cd8-9e06-5adb119c43fc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lmj8n" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.754274 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5mc6\" (UniqueName: \"kubernetes.io/projected/0da8e178-dbab-4c9c-9e7a-503796386d6f-kube-api-access-f5mc6\") pod \"nmstate-handler-lbjjz\" (UID: \"0da8e178-dbab-4c9c-9e7a-503796386d6f\") " pod="openshift-nmstate/nmstate-handler-lbjjz" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.754298 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0da8e178-dbab-4c9c-9e7a-503796386d6f-ovs-socket\") pod \"nmstate-handler-lbjjz\" (UID: \"0da8e178-dbab-4c9c-9e7a-503796386d6f\") " pod="openshift-nmstate/nmstate-handler-lbjjz" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.754382 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0da8e178-dbab-4c9c-9e7a-503796386d6f-dbus-socket\") pod \"nmstate-handler-lbjjz\" (UID: \"0da8e178-dbab-4c9c-9e7a-503796386d6f\") " pod="openshift-nmstate/nmstate-handler-lbjjz" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.754403 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0da8e178-dbab-4c9c-9e7a-503796386d6f-nmstate-lock\") pod \"nmstate-handler-lbjjz\" (UID: \"0da8e178-dbab-4c9c-9e7a-503796386d6f\") " pod="openshift-nmstate/nmstate-handler-lbjjz" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.846154 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-kgmtc"] Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.847230 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kgmtc" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.850236 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.850305 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.850475 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-7xjcw" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.856130 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0da8e178-dbab-4c9c-9e7a-503796386d6f-dbus-socket\") pod \"nmstate-handler-lbjjz\" (UID: \"0da8e178-dbab-4c9c-9e7a-503796386d6f\") " pod="openshift-nmstate/nmstate-handler-lbjjz" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.856176 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0da8e178-dbab-4c9c-9e7a-503796386d6f-nmstate-lock\") pod \"nmstate-handler-lbjjz\" (UID: \"0da8e178-dbab-4c9c-9e7a-503796386d6f\") " pod="openshift-nmstate/nmstate-handler-lbjjz" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.856233 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5cck\" (UniqueName: \"kubernetes.io/projected/7d962382-89ac-40cc-92b2-0bb0a8cecc4d-kube-api-access-w5cck\") pod \"nmstate-metrics-54757c584b-b2d62\" (UID: \"7d962382-89ac-40cc-92b2-0bb0a8cecc4d\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b2d62" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.856274 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a847c8c4-dd77-4cd8-9e06-5adb119c43fc-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-lmj8n\" (UID: \"a847c8c4-dd77-4cd8-9e06-5adb119c43fc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lmj8n" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.856305 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvshn\" (UniqueName: \"kubernetes.io/projected/a847c8c4-dd77-4cd8-9e06-5adb119c43fc-kube-api-access-kvshn\") pod \"nmstate-webhook-8474b5b9d8-lmj8n\" (UID: \"a847c8c4-dd77-4cd8-9e06-5adb119c43fc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lmj8n" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.856337 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0da8e178-dbab-4c9c-9e7a-503796386d6f-ovs-socket\") pod \"nmstate-handler-lbjjz\" (UID: \"0da8e178-dbab-4c9c-9e7a-503796386d6f\") " pod="openshift-nmstate/nmstate-handler-lbjjz" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.856361 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5mc6\" (UniqueName: \"kubernetes.io/projected/0da8e178-dbab-4c9c-9e7a-503796386d6f-kube-api-access-f5mc6\") pod \"nmstate-handler-lbjjz\" (UID: \"0da8e178-dbab-4c9c-9e7a-503796386d6f\") " pod="openshift-nmstate/nmstate-handler-lbjjz" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.856730 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/0da8e178-dbab-4c9c-9e7a-503796386d6f-ovs-socket\") pod \"nmstate-handler-lbjjz\" (UID: \"0da8e178-dbab-4c9c-9e7a-503796386d6f\") " pod="openshift-nmstate/nmstate-handler-lbjjz" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.856818 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0da8e178-dbab-4c9c-9e7a-503796386d6f-nmstate-lock\") pod \"nmstate-handler-lbjjz\" (UID: \"0da8e178-dbab-4c9c-9e7a-503796386d6f\") " pod="openshift-nmstate/nmstate-handler-lbjjz" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.856956 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0da8e178-dbab-4c9c-9e7a-503796386d6f-dbus-socket\") pod \"nmstate-handler-lbjjz\" (UID: \"0da8e178-dbab-4c9c-9e7a-503796386d6f\") " pod="openshift-nmstate/nmstate-handler-lbjjz" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.857147 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-kgmtc"] Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.876983 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a847c8c4-dd77-4cd8-9e06-5adb119c43fc-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-lmj8n\" (UID: \"a847c8c4-dd77-4cd8-9e06-5adb119c43fc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lmj8n" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.884451 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvshn\" (UniqueName: \"kubernetes.io/projected/a847c8c4-dd77-4cd8-9e06-5adb119c43fc-kube-api-access-kvshn\") pod \"nmstate-webhook-8474b5b9d8-lmj8n\" (UID: \"a847c8c4-dd77-4cd8-9e06-5adb119c43fc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lmj8n" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.889596 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5cck\" (UniqueName: \"kubernetes.io/projected/7d962382-89ac-40cc-92b2-0bb0a8cecc4d-kube-api-access-w5cck\") pod \"nmstate-metrics-54757c584b-b2d62\" (UID: \"7d962382-89ac-40cc-92b2-0bb0a8cecc4d\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b2d62" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.893627 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5mc6\" (UniqueName: \"kubernetes.io/projected/0da8e178-dbab-4c9c-9e7a-503796386d6f-kube-api-access-f5mc6\") pod \"nmstate-handler-lbjjz\" (UID: \"0da8e178-dbab-4c9c-9e7a-503796386d6f\") " pod="openshift-nmstate/nmstate-handler-lbjjz" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.924727 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b2d62" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.957800 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/79ffb165-f80d-428c-a29e-998f1a119cd7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-kgmtc\" (UID: \"79ffb165-f80d-428c-a29e-998f1a119cd7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kgmtc" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.957872 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/79ffb165-f80d-428c-a29e-998f1a119cd7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-kgmtc\" (UID: \"79ffb165-f80d-428c-a29e-998f1a119cd7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kgmtc" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.957936 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbr8k\" (UniqueName: \"kubernetes.io/projected/79ffb165-f80d-428c-a29e-998f1a119cd7-kube-api-access-dbr8k\") pod \"nmstate-console-plugin-7754f76f8b-kgmtc\" (UID: \"79ffb165-f80d-428c-a29e-998f1a119cd7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kgmtc" Jan 21 13:15:57 crc kubenswrapper[4765]: I0121 13:15:57.993006 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lmj8n" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.017045 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lbjjz" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.061252 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbr8k\" (UniqueName: \"kubernetes.io/projected/79ffb165-f80d-428c-a29e-998f1a119cd7-kube-api-access-dbr8k\") pod \"nmstate-console-plugin-7754f76f8b-kgmtc\" (UID: \"79ffb165-f80d-428c-a29e-998f1a119cd7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kgmtc" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.061348 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/79ffb165-f80d-428c-a29e-998f1a119cd7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-kgmtc\" (UID: \"79ffb165-f80d-428c-a29e-998f1a119cd7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kgmtc" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.061460 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/79ffb165-f80d-428c-a29e-998f1a119cd7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-kgmtc\" (UID: \"79ffb165-f80d-428c-a29e-998f1a119cd7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kgmtc" Jan 21 13:15:58 crc kubenswrapper[4765]: E0121 13:15:58.061619 4765 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 21 13:15:58 crc kubenswrapper[4765]: E0121 13:15:58.061683 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79ffb165-f80d-428c-a29e-998f1a119cd7-plugin-serving-cert podName:79ffb165-f80d-428c-a29e-998f1a119cd7 nodeName:}" failed. 
No retries permitted until 2026-01-21 13:15:58.561659735 +0000 UTC m=+819.579385557 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/79ffb165-f80d-428c-a29e-998f1a119cd7-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-kgmtc" (UID: "79ffb165-f80d-428c-a29e-998f1a119cd7") : secret "plugin-serving-cert" not found Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.064355 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/79ffb165-f80d-428c-a29e-998f1a119cd7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-kgmtc\" (UID: \"79ffb165-f80d-428c-a29e-998f1a119cd7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kgmtc" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.100588 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbr8k\" (UniqueName: \"kubernetes.io/projected/79ffb165-f80d-428c-a29e-998f1a119cd7-kube-api-access-dbr8k\") pod \"nmstate-console-plugin-7754f76f8b-kgmtc\" (UID: \"79ffb165-f80d-428c-a29e-998f1a119cd7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kgmtc" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.110691 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7cffbc547c-vz6f8"] Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.121831 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.139981 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cffbc547c-vz6f8"] Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.266037 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-service-ca\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.266086 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-console-serving-cert\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.266112 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-console-config\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.266140 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-console-oauth-config\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.266169 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-wc4z2\" (UniqueName: \"kubernetes.io/projected/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-kube-api-access-wc4z2\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.266254 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-trusted-ca-bundle\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.266281 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-oauth-serving-cert\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.300123 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b2d62"] Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.367838 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-trusted-ca-bundle\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.367891 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-oauth-serving-cert\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.367919 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-service-ca\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.367938 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-console-serving-cert\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.367958 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-console-config\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.367982 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-console-oauth-config\") pod \"console-7cffbc547c-vz6f8\" (UID: 
\"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.368008 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc4z2\" (UniqueName: \"kubernetes.io/projected/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-kube-api-access-wc4z2\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.369198 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-console-config\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.369475 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-oauth-serving-cert\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.370708 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-service-ca\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.373906 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-trusted-ca-bundle\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.376173 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-console-serving-cert\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.380255 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-console-oauth-config\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.391956 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc4z2\" (UniqueName: \"kubernetes.io/projected/a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1-kube-api-access-wc4z2\") pod \"console-7cffbc547c-vz6f8\" (UID: \"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1\") " pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.464833 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7cffbc547c-vz6f8" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.571704 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/79ffb165-f80d-428c-a29e-998f1a119cd7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-kgmtc\" (UID: \"79ffb165-f80d-428c-a29e-998f1a119cd7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kgmtc" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.576802 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/79ffb165-f80d-428c-a29e-998f1a119cd7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-kgmtc\" (UID: \"79ffb165-f80d-428c-a29e-998f1a119cd7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kgmtc" Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.600730 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-lmj8n"] Jan 21 13:15:58 crc kubenswrapper[4765]: W0121 13:15:58.605250 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda847c8c4_dd77_4cd8_9e06_5adb119c43fc.slice/crio-375da04e51ab69555089ba67e41834fd2dbc2c2803ff1e4851c64207544553f3 WatchSource:0}: Error finding container 375da04e51ab69555089ba67e41834fd2dbc2c2803ff1e4851c64207544553f3: Status 404 returned error can't find the container with id 375da04e51ab69555089ba67e41834fd2dbc2c2803ff1e4851c64207544553f3 Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.640015 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b2d62" event={"ID":"7d962382-89ac-40cc-92b2-0bb0a8cecc4d","Type":"ContainerStarted","Data":"865d198b73851c0040b6ce64167f88777c8735b582051bd54142fb19670c9cde"} Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.641626 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lmj8n" event={"ID":"a847c8c4-dd77-4cd8-9e06-5adb119c43fc","Type":"ContainerStarted","Data":"375da04e51ab69555089ba67e41834fd2dbc2c2803ff1e4851c64207544553f3"} Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.644052 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lbjjz" event={"ID":"0da8e178-dbab-4c9c-9e7a-503796386d6f","Type":"ContainerStarted","Data":"ad84fb528c8ca3c581f4c7f93f1b31e7dbe0171137b0d12fbdb59f3a1fbdee7e"} Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.654891 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7cffbc547c-vz6f8"] Jan 21 13:15:58 crc kubenswrapper[4765]: W0121 13:15:58.659293 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5c4d1c1_7c86_4ee5_b47a_d11e8a5ac5d1.slice/crio-160b0734ba476f6894ca0bfe04466992d6411b9ba95d7a6c3d3fde83c7a2778d WatchSource:0}: Error finding container 160b0734ba476f6894ca0bfe04466992d6411b9ba95d7a6c3d3fde83c7a2778d: Status 404 returned error can't find the container with id 160b0734ba476f6894ca0bfe04466992d6411b9ba95d7a6c3d3fde83c7a2778d Jan 21 13:15:58 crc kubenswrapper[4765]: I0121 13:15:58.796749 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kgmtc" Jan 21 13:15:59 crc kubenswrapper[4765]: I0121 13:15:59.069337 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-thmzs" Jan 21 13:15:59 crc kubenswrapper[4765]: I0121 13:15:59.118995 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-thmzs" Jan 21 13:15:59 crc kubenswrapper[4765]: W0121 13:15:59.246894 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79ffb165_f80d_428c_a29e_998f1a119cd7.slice/crio-0b13954a5de5b73b5c37e2e46f2e8a131998a6bd2b5876124ccfbeaecba1f454 WatchSource:0}: Error finding container 0b13954a5de5b73b5c37e2e46f2e8a131998a6bd2b5876124ccfbeaecba1f454: Status 404 returned error can't find the container with id 0b13954a5de5b73b5c37e2e46f2e8a131998a6bd2b5876124ccfbeaecba1f454 Jan 21 13:15:59 crc kubenswrapper[4765]: I0121 13:15:59.249371 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-kgmtc"] Jan 21 13:15:59 crc kubenswrapper[4765]: I0121 13:15:59.658077 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cffbc547c-vz6f8" event={"ID":"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1","Type":"ContainerStarted","Data":"160b0734ba476f6894ca0bfe04466992d6411b9ba95d7a6c3d3fde83c7a2778d"} Jan 21 13:15:59 crc kubenswrapper[4765]: I0121 13:15:59.665735 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kgmtc" event={"ID":"79ffb165-f80d-428c-a29e-998f1a119cd7","Type":"ContainerStarted","Data":"0b13954a5de5b73b5c37e2e46f2e8a131998a6bd2b5876124ccfbeaecba1f454"} Jan 21 13:16:00 crc kubenswrapper[4765]: I0121 13:16:00.681318 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7cffbc547c-vz6f8" event={"ID":"a5c4d1c1-7c86-4ee5-b47a-d11e8a5ac5d1","Type":"ContainerStarted","Data":"b4943f2fa5e916b145a6055df304b08e22e0bef285ebf7f0fa45e40a0624318a"} Jan 21 13:16:00 crc kubenswrapper[4765]: I0121 13:16:00.701938 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-thmzs"] Jan 21 13:16:00 crc kubenswrapper[4765]: I0121 13:16:00.702272 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-thmzs" podUID="7327f338-11e9-4d75-bcd4-aa62c4e1c830" containerName="registry-server" containerID="cri-o://26b806f339a978519b931c8e28e10cd4a3da66ba61fda9b120896241ecfb65fa" gracePeriod=2 Jan 21 13:16:00 crc kubenswrapper[4765]: I0121 13:16:00.710574 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7cffbc547c-vz6f8" podStartSLOduration=2.710533057 podStartE2EDuration="2.710533057s" podCreationTimestamp="2026-01-21 13:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:16:00.699395276 +0000 UTC m=+821.717121118" watchObservedRunningTime="2026-01-21 13:16:00.710533057 +0000 UTC m=+821.728258879" Jan 21 13:16:01 crc kubenswrapper[4765]: I0121 13:16:01.147646 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-thmzs" Jan 21 13:16:01 crc kubenswrapper[4765]: I0121 13:16:01.221448 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7327f338-11e9-4d75-bcd4-aa62c4e1c830-utilities\") pod \"7327f338-11e9-4d75-bcd4-aa62c4e1c830\" (UID: \"7327f338-11e9-4d75-bcd4-aa62c4e1c830\") " Jan 21 13:16:01 crc kubenswrapper[4765]: I0121 13:16:01.221514 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdfkt\" (UniqueName: \"kubernetes.io/projected/7327f338-11e9-4d75-bcd4-aa62c4e1c830-kube-api-access-jdfkt\") pod \"7327f338-11e9-4d75-bcd4-aa62c4e1c830\" (UID: \"7327f338-11e9-4d75-bcd4-aa62c4e1c830\") " Jan 21 13:16:01 crc kubenswrapper[4765]: I0121 13:16:01.221618 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7327f338-11e9-4d75-bcd4-aa62c4e1c830-catalog-content\") pod \"7327f338-11e9-4d75-bcd4-aa62c4e1c830\" (UID: \"7327f338-11e9-4d75-bcd4-aa62c4e1c830\") " Jan 21 13:16:01 crc kubenswrapper[4765]: I0121 13:16:01.222828 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7327f338-11e9-4d75-bcd4-aa62c4e1c830-utilities" (OuterVolumeSpecName: "utilities") pod "7327f338-11e9-4d75-bcd4-aa62c4e1c830" (UID: "7327f338-11e9-4d75-bcd4-aa62c4e1c830"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:16:01 crc kubenswrapper[4765]: I0121 13:16:01.245026 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7327f338-11e9-4d75-bcd4-aa62c4e1c830-kube-api-access-jdfkt" (OuterVolumeSpecName: "kube-api-access-jdfkt") pod "7327f338-11e9-4d75-bcd4-aa62c4e1c830" (UID: "7327f338-11e9-4d75-bcd4-aa62c4e1c830"). InnerVolumeSpecName "kube-api-access-jdfkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:16:01 crc kubenswrapper[4765]: I0121 13:16:01.324198 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7327f338-11e9-4d75-bcd4-aa62c4e1c830-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:16:01 crc kubenswrapper[4765]: I0121 13:16:01.324251 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdfkt\" (UniqueName: \"kubernetes.io/projected/7327f338-11e9-4d75-bcd4-aa62c4e1c830-kube-api-access-jdfkt\") on node \"crc\" DevicePath \"\"" Jan 21 13:16:01 crc kubenswrapper[4765]: I0121 13:16:01.352066 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7327f338-11e9-4d75-bcd4-aa62c4e1c830-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7327f338-11e9-4d75-bcd4-aa62c4e1c830" (UID: "7327f338-11e9-4d75-bcd4-aa62c4e1c830"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:16:01 crc kubenswrapper[4765]: I0121 13:16:01.425678 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7327f338-11e9-4d75-bcd4-aa62c4e1c830-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:16:01 crc kubenswrapper[4765]: I0121 13:16:01.693149 4765 generic.go:334] "Generic (PLEG): container finished" podID="7327f338-11e9-4d75-bcd4-aa62c4e1c830" containerID="26b806f339a978519b931c8e28e10cd4a3da66ba61fda9b120896241ecfb65fa" exitCode=0 Jan 21 13:16:01 crc kubenswrapper[4765]: I0121 13:16:01.693243 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thmzs" Jan 21 13:16:01 crc kubenswrapper[4765]: I0121 13:16:01.693248 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thmzs" event={"ID":"7327f338-11e9-4d75-bcd4-aa62c4e1c830","Type":"ContainerDied","Data":"26b806f339a978519b931c8e28e10cd4a3da66ba61fda9b120896241ecfb65fa"} Jan 21 13:16:01 crc kubenswrapper[4765]: I0121 13:16:01.693801 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thmzs" event={"ID":"7327f338-11e9-4d75-bcd4-aa62c4e1c830","Type":"ContainerDied","Data":"6a64a90bcf24479b22accbc439b0ddd4d01747f03a2d6d0769f8d98a5b2641cf"} Jan 21 13:16:01 crc kubenswrapper[4765]: I0121 13:16:01.693833 4765 scope.go:117] "RemoveContainer" containerID="26b806f339a978519b931c8e28e10cd4a3da66ba61fda9b120896241ecfb65fa" Jan 21 13:16:01 crc kubenswrapper[4765]: I0121 13:16:01.722220 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-thmzs"] Jan 21 13:16:01 crc kubenswrapper[4765]: I0121 13:16:01.729594 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-thmzs"] Jan 21 13:16:03 crc kubenswrapper[4765]: I0121 13:16:03.622196 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7327f338-11e9-4d75-bcd4-aa62c4e1c830" path="/var/lib/kubelet/pods/7327f338-11e9-4d75-bcd4-aa62c4e1c830/volumes" Jan 21 13:16:03 crc kubenswrapper[4765]: I0121 13:16:03.802516 4765 scope.go:117] "RemoveContainer" containerID="99959ab4cb014ca6da299fb9d55ded54a8ed89ed6b884ffd2a19ad99bf73288a" Jan 21 13:16:03 crc kubenswrapper[4765]: I0121 13:16:03.844391 4765 scope.go:117] "RemoveContainer" containerID="c49b3a1afc6d3e9fee1d1b65467b87839fa39f6f51440ba272cedde76b58c271" Jan 21 13:16:03 crc kubenswrapper[4765]: I0121 13:16:03.881520 4765 scope.go:117] "RemoveContainer" containerID="26b806f339a978519b931c8e28e10cd4a3da66ba61fda9b120896241ecfb65fa" Jan 21 13:16:03 crc kubenswrapper[4765]: E0121 13:16:03.882421 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26b806f339a978519b931c8e28e10cd4a3da66ba61fda9b120896241ecfb65fa\": container with ID starting with 26b806f339a978519b931c8e28e10cd4a3da66ba61fda9b120896241ecfb65fa not found: ID does not exist" containerID="26b806f339a978519b931c8e28e10cd4a3da66ba61fda9b120896241ecfb65fa" Jan 21 13:16:03 crc kubenswrapper[4765]: I0121 13:16:03.882472 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26b806f339a978519b931c8e28e10cd4a3da66ba61fda9b120896241ecfb65fa"} err="failed to get container status \"26b806f339a978519b931c8e28e10cd4a3da66ba61fda9b120896241ecfb65fa\": rpc error: code = NotFound desc 
Jan 21 13:16:03 crc kubenswrapper[4765]: I0121 13:16:03.882503 4765 scope.go:117] "RemoveContainer" containerID="99959ab4cb014ca6da299fb9d55ded54a8ed89ed6b884ffd2a19ad99bf73288a"
Jan 21 13:16:03 crc kubenswrapper[4765]: E0121 13:16:03.883452 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99959ab4cb014ca6da299fb9d55ded54a8ed89ed6b884ffd2a19ad99bf73288a\": container with ID starting with 99959ab4cb014ca6da299fb9d55ded54a8ed89ed6b884ffd2a19ad99bf73288a not found: ID does not exist" containerID="99959ab4cb014ca6da299fb9d55ded54a8ed89ed6b884ffd2a19ad99bf73288a"
Jan 21 13:16:03 crc kubenswrapper[4765]: I0121 13:16:03.883499 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99959ab4cb014ca6da299fb9d55ded54a8ed89ed6b884ffd2a19ad99bf73288a"} err="failed to get container status \"99959ab4cb014ca6da299fb9d55ded54a8ed89ed6b884ffd2a19ad99bf73288a\": rpc error: code = NotFound desc = could not find container \"99959ab4cb014ca6da299fb9d55ded54a8ed89ed6b884ffd2a19ad99bf73288a\": container with ID starting with 99959ab4cb014ca6da299fb9d55ded54a8ed89ed6b884ffd2a19ad99bf73288a not found: ID does not exist"
Jan 21 13:16:03 crc kubenswrapper[4765]: I0121 13:16:03.883534 4765 scope.go:117] "RemoveContainer" containerID="c49b3a1afc6d3e9fee1d1b65467b87839fa39f6f51440ba272cedde76b58c271"
Jan 21 13:16:03 crc kubenswrapper[4765]: E0121 13:16:03.883857 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c49b3a1afc6d3e9fee1d1b65467b87839fa39f6f51440ba272cedde76b58c271\": container with ID starting with c49b3a1afc6d3e9fee1d1b65467b87839fa39f6f51440ba272cedde76b58c271 not found: ID does not exist" containerID="c49b3a1afc6d3e9fee1d1b65467b87839fa39f6f51440ba272cedde76b58c271"
Jan 21 13:16:03 crc kubenswrapper[4765]: I0121 13:16:03.883891 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c49b3a1afc6d3e9fee1d1b65467b87839fa39f6f51440ba272cedde76b58c271"} err="failed to get container status \"c49b3a1afc6d3e9fee1d1b65467b87839fa39f6f51440ba272cedde76b58c271\": rpc error: code = NotFound desc = could not find container \"c49b3a1afc6d3e9fee1d1b65467b87839fa39f6f51440ba272cedde76b58c271\": container with ID starting with c49b3a1afc6d3e9fee1d1b65467b87839fa39f6f51440ba272cedde76b58c271 not found: ID does not exist"
Jan 21 13:16:04 crc kubenswrapper[4765]: I0121 13:16:04.715770 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lmj8n" event={"ID":"a847c8c4-dd77-4cd8-9e06-5adb119c43fc","Type":"ContainerStarted","Data":"4e759b432f613f9d40e41b6ec9d35ddddfdf7d763d3099bd2bc40317baeaaf8d"}
Jan 21 13:16:04 crc kubenswrapper[4765]: I0121 13:16:04.716380 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lmj8n"
Jan 21 13:16:04 crc kubenswrapper[4765]: I0121 13:16:04.725117 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lbjjz" event={"ID":"0da8e178-dbab-4c9c-9e7a-503796386d6f","Type":"ContainerStarted","Data":"926b17e64ffae5a74774bd4635777852ff185da9d5fd5893c4559a307edc718f"}
Jan 21 13:16:04 crc kubenswrapper[4765]: I0121 13:16:04.725581 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-lbjjz"
Jan 21 13:16:04 crc kubenswrapper[4765]: I0121 13:16:04.730911 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kgmtc" event={"ID":"79ffb165-f80d-428c-a29e-998f1a119cd7","Type":"ContainerStarted","Data":"6dd89476bfc40df65866171f8ed127fe49bf1405e60dc490898901fd5d144358"}
Jan 21 13:16:04 crc kubenswrapper[4765]: I0121 13:16:04.736912 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b2d62" event={"ID":"7d962382-89ac-40cc-92b2-0bb0a8cecc4d","Type":"ContainerStarted","Data":"a61b7829c472d879112b18fa47770602171b3e8d083d4fc70866f46b45a8c323"}
Jan 21 13:16:04 crc kubenswrapper[4765]: I0121 13:16:04.742508 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lmj8n" podStartSLOduration=2.5034462939999997 podStartE2EDuration="7.742487597s" podCreationTimestamp="2026-01-21 13:15:57 +0000 UTC" firstStartedPulling="2026-01-21 13:15:58.607645919 +0000 UTC m=+819.625371741" lastFinishedPulling="2026-01-21 13:16:03.846687222 +0000 UTC m=+824.864413044" observedRunningTime="2026-01-21 13:16:04.736319604 +0000 UTC m=+825.754045426" watchObservedRunningTime="2026-01-21 13:16:04.742487597 +0000 UTC m=+825.760213419"
Jan 21 13:16:04 crc kubenswrapper[4765]: I0121 13:16:04.768653 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-lbjjz" podStartSLOduration=1.9671667579999998 podStartE2EDuration="7.768625182s" podCreationTimestamp="2026-01-21 13:15:57 +0000 UTC" firstStartedPulling="2026-01-21 13:15:58.052807283 +0000 UTC m=+819.070533105" lastFinishedPulling="2026-01-21 13:16:03.854265707 +0000 UTC m=+824.871991529" observedRunningTime="2026-01-21 13:16:04.762192731 +0000 UTC m=+825.779918563" watchObservedRunningTime="2026-01-21 13:16:04.768625182 +0000 UTC m=+825.786351024"
Jan 21 13:16:04 crc kubenswrapper[4765]: I0121 13:16:04.779411 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kgmtc" podStartSLOduration=3.182717051 podStartE2EDuration="7.779384851s" podCreationTimestamp="2026-01-21 13:15:57 +0000 UTC" firstStartedPulling="2026-01-21 13:15:59.249308641 +0000 UTC m=+820.267034463" lastFinishedPulling="2026-01-21 13:16:03.845976451 +0000 UTC m=+824.863702263" observedRunningTime="2026-01-21 13:16:04.776197106 +0000 UTC m=+825.793922928" watchObservedRunningTime="2026-01-21 13:16:04.779384851 +0000 UTC m=+825.797110673"
Jan 21 13:16:08 crc kubenswrapper[4765]: I0121 13:16:08.465893 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7cffbc547c-vz6f8"
Jan 21 13:16:08 crc kubenswrapper[4765]: I0121 13:16:08.466244 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7cffbc547c-vz6f8"
Jan 21 13:16:08 crc kubenswrapper[4765]: I0121 13:16:08.473389 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7cffbc547c-vz6f8"
Jan 21 13:16:08 crc kubenswrapper[4765]: I0121 13:16:08.768370 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7cffbc547c-vz6f8"
Jan 21 13:16:08 crc kubenswrapper[4765]: I0121 13:16:08.834806 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-l7658"]
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-l7658"] Jan 21 13:16:09 crc kubenswrapper[4765]: I0121 13:16:09.774464 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b2d62" event={"ID":"7d962382-89ac-40cc-92b2-0bb0a8cecc4d","Type":"ContainerStarted","Data":"add85656b6f25f9cf8f8f9138bd37fe0f57ac3677761e79c8bdd7bdb0aeab19f"} Jan 21 13:16:09 crc kubenswrapper[4765]: I0121 13:16:09.811862 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-b2d62" podStartSLOduration=2.487894002 podStartE2EDuration="12.811800507s" podCreationTimestamp="2026-01-21 13:15:57 +0000 UTC" firstStartedPulling="2026-01-21 13:15:58.311019916 +0000 UTC m=+819.328745738" lastFinishedPulling="2026-01-21 13:16:08.634926421 +0000 UTC m=+829.652652243" observedRunningTime="2026-01-21 13:16:09.796778752 +0000 UTC m=+830.814504574" watchObservedRunningTime="2026-01-21 13:16:09.811800507 +0000 UTC m=+830.829526329" Jan 21 13:16:13 crc kubenswrapper[4765]: I0121 13:16:13.049250 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-lbjjz" Jan 21 13:16:18 crc kubenswrapper[4765]: I0121 13:16:18.003992 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lmj8n" Jan 21 13:16:31 crc kubenswrapper[4765]: I0121 13:16:31.876756 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w"] Jan 21 13:16:31 crc kubenswrapper[4765]: E0121 13:16:31.878795 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7327f338-11e9-4d75-bcd4-aa62c4e1c830" containerName="extract-utilities" Jan 21 13:16:31 crc kubenswrapper[4765]: I0121 13:16:31.878880 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7327f338-11e9-4d75-bcd4-aa62c4e1c830" containerName="extract-utilities" Jan 21 13:16:31 crc kubenswrapper[4765]: E0121 13:16:31.878936 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7327f338-11e9-4d75-bcd4-aa62c4e1c830" containerName="extract-content" Jan 21 13:16:31 crc kubenswrapper[4765]: I0121 13:16:31.878990 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7327f338-11e9-4d75-bcd4-aa62c4e1c830" containerName="extract-content" Jan 21 13:16:31 crc kubenswrapper[4765]: E0121 13:16:31.879043 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7327f338-11e9-4d75-bcd4-aa62c4e1c830" containerName="registry-server" Jan 21 13:16:31 crc kubenswrapper[4765]: I0121 13:16:31.879093 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7327f338-11e9-4d75-bcd4-aa62c4e1c830" containerName="registry-server" Jan 21 13:16:31 crc kubenswrapper[4765]: I0121 13:16:31.879308 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="7327f338-11e9-4d75-bcd4-aa62c4e1c830" containerName="registry-server" Jan 21 13:16:31 crc kubenswrapper[4765]: I0121 13:16:31.880325 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w" Jan 21 13:16:31 crc kubenswrapper[4765]: I0121 13:16:31.882742 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 13:16:31 crc kubenswrapper[4765]: I0121 13:16:31.894110 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w"] Jan 21 13:16:31 crc kubenswrapper[4765]: I0121 13:16:31.991919 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d73b65cf-eba0-49dd-81ad-0fb0431092b8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w\" (UID: \"d73b65cf-eba0-49dd-81ad-0fb0431092b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w" Jan 21 13:16:31 crc kubenswrapper[4765]: I0121 13:16:31.992019 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq9dv\" (UniqueName: \"kubernetes.io/projected/d73b65cf-eba0-49dd-81ad-0fb0431092b8-kube-api-access-kq9dv\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w\" (UID: \"d73b65cf-eba0-49dd-81ad-0fb0431092b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w" Jan 21 13:16:31 crc kubenswrapper[4765]: I0121 13:16:31.992070 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d73b65cf-eba0-49dd-81ad-0fb0431092b8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w\" (UID: \"d73b65cf-eba0-49dd-81ad-0fb0431092b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w" Jan 21 13:16:32 crc kubenswrapper[4765]: I0121 13:16:32.093342 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d73b65cf-eba0-49dd-81ad-0fb0431092b8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w\" (UID: \"d73b65cf-eba0-49dd-81ad-0fb0431092b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w" Jan 21 13:16:32 crc kubenswrapper[4765]: I0121 13:16:32.093467 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq9dv\" (UniqueName: \"kubernetes.io/projected/d73b65cf-eba0-49dd-81ad-0fb0431092b8-kube-api-access-kq9dv\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w\" (UID: \"d73b65cf-eba0-49dd-81ad-0fb0431092b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w" Jan 21 13:16:32 crc kubenswrapper[4765]: I0121 13:16:32.093497 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d73b65cf-eba0-49dd-81ad-0fb0431092b8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w\" (UID: \"d73b65cf-eba0-49dd-81ad-0fb0431092b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w" Jan 21 13:16:32 crc kubenswrapper[4765]: I0121 13:16:32.094032 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d73b65cf-eba0-49dd-81ad-0fb0431092b8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w\" (UID: \"d73b65cf-eba0-49dd-81ad-0fb0431092b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w" Jan 21 13:16:32 crc kubenswrapper[4765]: I0121 13:16:32.094054 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d73b65cf-eba0-49dd-81ad-0fb0431092b8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w\" (UID: \"d73b65cf-eba0-49dd-81ad-0fb0431092b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w" Jan 21 13:16:32 crc kubenswrapper[4765]: I0121 13:16:32.117399 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq9dv\" (UniqueName: \"kubernetes.io/projected/d73b65cf-eba0-49dd-81ad-0fb0431092b8-kube-api-access-kq9dv\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w\" (UID: \"d73b65cf-eba0-49dd-81ad-0fb0431092b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w" Jan 21 13:16:32 crc kubenswrapper[4765]: I0121 13:16:32.199616 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w" Jan 21 13:16:32 crc kubenswrapper[4765]: I0121 13:16:32.630373 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w"] Jan 21 13:16:32 crc kubenswrapper[4765]: I0121 13:16:32.913253 4765 generic.go:334] "Generic (PLEG): container finished" podID="d73b65cf-eba0-49dd-81ad-0fb0431092b8" containerID="59b56ee63cafd4c936733faccfd03b56f561db6d0bee7faac12c08ef3985e735" exitCode=0 Jan 21 13:16:32 crc kubenswrapper[4765]: I0121 13:16:32.913307 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w" event={"ID":"d73b65cf-eba0-49dd-81ad-0fb0431092b8","Type":"ContainerDied","Data":"59b56ee63cafd4c936733faccfd03b56f561db6d0bee7faac12c08ef3985e735"} Jan 21 13:16:32 crc kubenswrapper[4765]: I0121 13:16:32.913338 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w" event={"ID":"d73b65cf-eba0-49dd-81ad-0fb0431092b8","Type":"ContainerStarted","Data":"18d883c62e89bc818e50ffcc22189e2253e0712c4b3f4b864cd6322d05dbb5a4"} Jan 21 13:16:33 crc kubenswrapper[4765]: I0121 13:16:33.873229 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-l7658" podUID="0be5f3b8-eeae-405b-a836-e806531a57e0" containerName="console" containerID="cri-o://2c4ca1e01035568324e9dc53a3f1b1c8ede91e249c6f3c0726820c09063f503d" gracePeriod=15 Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.255930 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-l7658_0be5f3b8-eeae-405b-a836-e806531a57e0/console/0.log" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.256025 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.427923 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0be5f3b8-eeae-405b-a836-e806531a57e0-console-oauth-config\") pod \"0be5f3b8-eeae-405b-a836-e806531a57e0\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.428509 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-trusted-ca-bundle\") pod \"0be5f3b8-eeae-405b-a836-e806531a57e0\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.428539 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvmgm\" (UniqueName: \"kubernetes.io/projected/0be5f3b8-eeae-405b-a836-e806531a57e0-kube-api-access-fvmgm\") pod \"0be5f3b8-eeae-405b-a836-e806531a57e0\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.428605 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-service-ca\") pod \"0be5f3b8-eeae-405b-a836-e806531a57e0\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.428697 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-console-config\") pod \"0be5f3b8-eeae-405b-a836-e806531a57e0\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.428776 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0be5f3b8-eeae-405b-a836-e806531a57e0-console-serving-cert\") pod \"0be5f3b8-eeae-405b-a836-e806531a57e0\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.428825 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-oauth-serving-cert\") pod \"0be5f3b8-eeae-405b-a836-e806531a57e0\" (UID: \"0be5f3b8-eeae-405b-a836-e806531a57e0\") " Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.429493 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0be5f3b8-eeae-405b-a836-e806531a57e0" (UID: "0be5f3b8-eeae-405b-a836-e806531a57e0"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.429508 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-console-config" (OuterVolumeSpecName: "console-config") pod "0be5f3b8-eeae-405b-a836-e806531a57e0" (UID: "0be5f3b8-eeae-405b-a836-e806531a57e0"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.429656 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-service-ca" (OuterVolumeSpecName: "service-ca") pod "0be5f3b8-eeae-405b-a836-e806531a57e0" (UID: "0be5f3b8-eeae-405b-a836-e806531a57e0"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.429713 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0be5f3b8-eeae-405b-a836-e806531a57e0" (UID: "0be5f3b8-eeae-405b-a836-e806531a57e0"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.435033 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0be5f3b8-eeae-405b-a836-e806531a57e0-kube-api-access-fvmgm" (OuterVolumeSpecName: "kube-api-access-fvmgm") pod "0be5f3b8-eeae-405b-a836-e806531a57e0" (UID: "0be5f3b8-eeae-405b-a836-e806531a57e0"). InnerVolumeSpecName "kube-api-access-fvmgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.435514 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0be5f3b8-eeae-405b-a836-e806531a57e0-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0be5f3b8-eeae-405b-a836-e806531a57e0" (UID: "0be5f3b8-eeae-405b-a836-e806531a57e0"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.435694 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0be5f3b8-eeae-405b-a836-e806531a57e0-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0be5f3b8-eeae-405b-a836-e806531a57e0" (UID: "0be5f3b8-eeae-405b-a836-e806531a57e0"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.530474 4765 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.530540 4765 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.530555 4765 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0be5f3b8-eeae-405b-a836-e806531a57e0-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.530569 4765 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.530583 4765 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0be5f3b8-eeae-405b-a836-e806531a57e0-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.530595 4765 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0be5f3b8-eeae-405b-a836-e806531a57e0-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.530606 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvmgm\" (UniqueName: \"kubernetes.io/projected/0be5f3b8-eeae-405b-a836-e806531a57e0-kube-api-access-fvmgm\") on node \"crc\" DevicePath \"\"" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.928963 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-l7658_0be5f3b8-eeae-405b-a836-e806531a57e0/console/0.log" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.929050 4765 generic.go:334] "Generic (PLEG): container finished" podID="0be5f3b8-eeae-405b-a836-e806531a57e0" containerID="2c4ca1e01035568324e9dc53a3f1b1c8ede91e249c6f3c0726820c09063f503d" exitCode=2 Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.929113 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l7658" event={"ID":"0be5f3b8-eeae-405b-a836-e806531a57e0","Type":"ContainerDied","Data":"2c4ca1e01035568324e9dc53a3f1b1c8ede91e249c6f3c0726820c09063f503d"} Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.929146 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l7658" event={"ID":"0be5f3b8-eeae-405b-a836-e806531a57e0","Type":"ContainerDied","Data":"28528cae8e5c9dc2b7b2bdda46ae05bd786b633a4867bcf0315a0b81f6776b86"} Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.929161 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-l7658" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.929187 4765 scope.go:117] "RemoveContainer" containerID="2c4ca1e01035568324e9dc53a3f1b1c8ede91e249c6f3c0726820c09063f503d" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.950187 4765 scope.go:117] "RemoveContainer" containerID="2c4ca1e01035568324e9dc53a3f1b1c8ede91e249c6f3c0726820c09063f503d" Jan 21 13:16:34 crc kubenswrapper[4765]: E0121 13:16:34.950746 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c4ca1e01035568324e9dc53a3f1b1c8ede91e249c6f3c0726820c09063f503d\": container with ID starting with 2c4ca1e01035568324e9dc53a3f1b1c8ede91e249c6f3c0726820c09063f503d not found: ID does not exist" containerID="2c4ca1e01035568324e9dc53a3f1b1c8ede91e249c6f3c0726820c09063f503d" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.950799 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c4ca1e01035568324e9dc53a3f1b1c8ede91e249c6f3c0726820c09063f503d"} err="failed to get container status \"2c4ca1e01035568324e9dc53a3f1b1c8ede91e249c6f3c0726820c09063f503d\": rpc error: code = NotFound desc = could not find container \"2c4ca1e01035568324e9dc53a3f1b1c8ede91e249c6f3c0726820c09063f503d\": container with ID starting with 2c4ca1e01035568324e9dc53a3f1b1c8ede91e249c6f3c0726820c09063f503d not found: ID does not exist" Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.975801 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-l7658"] Jan 21 13:16:34 crc kubenswrapper[4765]: I0121 13:16:34.979189 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-l7658"] Jan 21 13:16:35 crc kubenswrapper[4765]: I0121 13:16:35.621373 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0be5f3b8-eeae-405b-a836-e806531a57e0" path="/var/lib/kubelet/pods/0be5f3b8-eeae-405b-a836-e806531a57e0/volumes" Jan 21 13:16:36 crc kubenswrapper[4765]: I0121 13:16:36.947686 4765 generic.go:334] "Generic (PLEG): container finished" podID="d73b65cf-eba0-49dd-81ad-0fb0431092b8" containerID="761529ef6ed61ac7cb5c27c7ccf68e643ec6ade17ebadb64770e00856711a1a6" exitCode=0 Jan 21 13:16:36 crc kubenswrapper[4765]: I0121 13:16:36.947755 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w" event={"ID":"d73b65cf-eba0-49dd-81ad-0fb0431092b8","Type":"ContainerDied","Data":"761529ef6ed61ac7cb5c27c7ccf68e643ec6ade17ebadb64770e00856711a1a6"} Jan 21 13:16:37 crc kubenswrapper[4765]: I0121 13:16:37.956606 4765 generic.go:334] "Generic (PLEG): container finished" podID="d73b65cf-eba0-49dd-81ad-0fb0431092b8" containerID="277e34f3ffde7d33ba226073920f4494c6806155717ededb4b623799d093a070" exitCode=0 Jan 21 13:16:37 crc kubenswrapper[4765]: I0121 13:16:37.957638 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w" event={"ID":"d73b65cf-eba0-49dd-81ad-0fb0431092b8","Type":"ContainerDied","Data":"277e34f3ffde7d33ba226073920f4494c6806155717ededb4b623799d093a070"} Jan 21 13:16:39 crc kubenswrapper[4765]: I0121 13:16:39.195507 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w" Jan 21 13:16:39 crc kubenswrapper[4765]: I0121 13:16:39.304434 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d73b65cf-eba0-49dd-81ad-0fb0431092b8-bundle\") pod \"d73b65cf-eba0-49dd-81ad-0fb0431092b8\" (UID: \"d73b65cf-eba0-49dd-81ad-0fb0431092b8\") " Jan 21 13:16:39 crc kubenswrapper[4765]: I0121 13:16:39.304757 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq9dv\" (UniqueName: \"kubernetes.io/projected/d73b65cf-eba0-49dd-81ad-0fb0431092b8-kube-api-access-kq9dv\") pod \"d73b65cf-eba0-49dd-81ad-0fb0431092b8\" (UID: \"d73b65cf-eba0-49dd-81ad-0fb0431092b8\") " Jan 21 13:16:39 crc kubenswrapper[4765]: I0121 13:16:39.304936 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d73b65cf-eba0-49dd-81ad-0fb0431092b8-util\") pod \"d73b65cf-eba0-49dd-81ad-0fb0431092b8\" (UID: \"d73b65cf-eba0-49dd-81ad-0fb0431092b8\") " Jan 21 13:16:39 crc kubenswrapper[4765]: I0121 13:16:39.305974 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d73b65cf-eba0-49dd-81ad-0fb0431092b8-bundle" (OuterVolumeSpecName: "bundle") pod "d73b65cf-eba0-49dd-81ad-0fb0431092b8" (UID: "d73b65cf-eba0-49dd-81ad-0fb0431092b8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:16:39 crc kubenswrapper[4765]: I0121 13:16:39.314448 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d73b65cf-eba0-49dd-81ad-0fb0431092b8-kube-api-access-kq9dv" (OuterVolumeSpecName: "kube-api-access-kq9dv") pod "d73b65cf-eba0-49dd-81ad-0fb0431092b8" (UID: "d73b65cf-eba0-49dd-81ad-0fb0431092b8"). InnerVolumeSpecName "kube-api-access-kq9dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:16:39 crc kubenswrapper[4765]: I0121 13:16:39.317271 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d73b65cf-eba0-49dd-81ad-0fb0431092b8-util" (OuterVolumeSpecName: "util") pod "d73b65cf-eba0-49dd-81ad-0fb0431092b8" (UID: "d73b65cf-eba0-49dd-81ad-0fb0431092b8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:16:39 crc kubenswrapper[4765]: I0121 13:16:39.407068 4765 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d73b65cf-eba0-49dd-81ad-0fb0431092b8-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:16:39 crc kubenswrapper[4765]: I0121 13:16:39.407721 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq9dv\" (UniqueName: \"kubernetes.io/projected/d73b65cf-eba0-49dd-81ad-0fb0431092b8-kube-api-access-kq9dv\") on node \"crc\" DevicePath \"\"" Jan 21 13:16:39 crc kubenswrapper[4765]: I0121 13:16:39.407742 4765 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d73b65cf-eba0-49dd-81ad-0fb0431092b8-util\") on node \"crc\" DevicePath \"\"" Jan 21 13:16:39 crc kubenswrapper[4765]: I0121 13:16:39.973229 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w" event={"ID":"d73b65cf-eba0-49dd-81ad-0fb0431092b8","Type":"ContainerDied","Data":"18d883c62e89bc818e50ffcc22189e2253e0712c4b3f4b864cd6322d05dbb5a4"} Jan 21 13:16:39 crc kubenswrapper[4765]: I0121 13:16:39.973294 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18d883c62e89bc818e50ffcc22189e2253e0712c4b3f4b864cd6322d05dbb5a4" Jan 21 13:16:39 crc kubenswrapper[4765]: I0121 13:16:39.973326 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.146469 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8"] Jan 21 13:16:50 crc kubenswrapper[4765]: E0121 13:16:50.147429 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d73b65cf-eba0-49dd-81ad-0fb0431092b8" containerName="extract" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.147449 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d73b65cf-eba0-49dd-81ad-0fb0431092b8" containerName="extract" Jan 21 13:16:50 crc kubenswrapper[4765]: E0121 13:16:50.147468 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be5f3b8-eeae-405b-a836-e806531a57e0" containerName="console" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.147476 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be5f3b8-eeae-405b-a836-e806531a57e0" containerName="console" Jan 21 13:16:50 crc kubenswrapper[4765]: E0121 13:16:50.147486 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d73b65cf-eba0-49dd-81ad-0fb0431092b8" containerName="pull" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.147494 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d73b65cf-eba0-49dd-81ad-0fb0431092b8" containerName="pull" Jan 21 13:16:50 crc kubenswrapper[4765]: E0121 13:16:50.147511 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d73b65cf-eba0-49dd-81ad-0fb0431092b8" containerName="util" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.147518 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d73b65cf-eba0-49dd-81ad-0fb0431092b8" containerName="util" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.147643 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be5f3b8-eeae-405b-a836-e806531a57e0" containerName="console" Jan 
21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.147663 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="d73b65cf-eba0-49dd-81ad-0fb0431092b8" containerName="extract" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.148161 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8" Jan 21 13:16:50 crc kubenswrapper[4765]: W0121 13:16:50.170871 4765 reflector.go:561] object-"metallb-system"/"metallb-operator-webhook-server-cert": failed to list *v1.Secret: secrets "metallb-operator-webhook-server-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 21 13:16:50 crc kubenswrapper[4765]: W0121 13:16:50.170969 4765 reflector.go:561] object-"metallb-system"/"metallb-operator-controller-manager-service-cert": failed to list *v1.Secret: secrets "metallb-operator-controller-manager-service-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 21 13:16:50 crc kubenswrapper[4765]: E0121 13:16:50.171027 4765 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-controller-manager-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metallb-operator-controller-manager-service-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 13:16:50 crc kubenswrapper[4765]: W0121 13:16:50.171000 4765 reflector.go:561] object-"metallb-system"/"manager-account-dockercfg-4nh7z": failed to list *v1.Secret: secrets "manager-account-dockercfg-4nh7z" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 21 13:16:50 crc kubenswrapper[4765]: E0121 13:16:50.171060 4765 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"manager-account-dockercfg-4nh7z\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"manager-account-dockercfg-4nh7z\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 13:16:50 crc kubenswrapper[4765]: W0121 13:16:50.171074 4765 reflector.go:561] object-"metallb-system"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 21 13:16:50 crc kubenswrapper[4765]: E0121 13:16:50.171078 4765 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metallb-operator-webhook-server-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 13:16:50 crc kubenswrapper[4765]: W0121 13:16:50.171074 4765 
reflector.go:561] object-"metallb-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Jan 21 13:16:50 crc kubenswrapper[4765]: E0121 13:16:50.171117 4765 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 13:16:50 crc kubenswrapper[4765]: E0121 13:16:50.171123 4765 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.327549 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57ed60d8-a38f-47ba-b66d-6e7e557b4399-webhook-cert\") pod \"metallb-operator-controller-manager-6c66566bf6-ls8r8\" (UID: \"57ed60d8-a38f-47ba-b66d-6e7e557b4399\") " pod="metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.327648 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2wz9\" (UniqueName: \"kubernetes.io/projected/57ed60d8-a38f-47ba-b66d-6e7e557b4399-kube-api-access-q2wz9\") pod \"metallb-operator-controller-manager-6c66566bf6-ls8r8\" (UID: \"57ed60d8-a38f-47ba-b66d-6e7e557b4399\") " pod="metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.328064 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/57ed60d8-a38f-47ba-b66d-6e7e557b4399-apiservice-cert\") pod \"metallb-operator-controller-manager-6c66566bf6-ls8r8\" (UID: \"57ed60d8-a38f-47ba-b66d-6e7e557b4399\") " pod="metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.429638 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/57ed60d8-a38f-47ba-b66d-6e7e557b4399-apiservice-cert\") pod \"metallb-operator-controller-manager-6c66566bf6-ls8r8\" (UID: \"57ed60d8-a38f-47ba-b66d-6e7e557b4399\") " pod="metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.429720 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57ed60d8-a38f-47ba-b66d-6e7e557b4399-webhook-cert\") pod \"metallb-operator-controller-manager-6c66566bf6-ls8r8\" (UID: \"57ed60d8-a38f-47ba-b66d-6e7e557b4399\") " pod="metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 
13:16:50.429757 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2wz9\" (UniqueName: \"kubernetes.io/projected/57ed60d8-a38f-47ba-b66d-6e7e557b4399-kube-api-access-q2wz9\") pod \"metallb-operator-controller-manager-6c66566bf6-ls8r8\" (UID: \"57ed60d8-a38f-47ba-b66d-6e7e557b4399\") " pod="metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.435240 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8"] Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.591594 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-77844fbdcc-cgv2c"] Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.596768 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-77844fbdcc-cgv2c" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.601331 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.604827 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.604939 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-jntxf" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.616136 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-77844fbdcc-cgv2c"] Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.734169 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c9rz\" (UniqueName: \"kubernetes.io/projected/7ba871a2-babc-4cc6-a13b-4fa78e3d0580-kube-api-access-2c9rz\") pod \"metallb-operator-webhook-server-77844fbdcc-cgv2c\" (UID: \"7ba871a2-babc-4cc6-a13b-4fa78e3d0580\") " pod="metallb-system/metallb-operator-webhook-server-77844fbdcc-cgv2c" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.734253 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ba871a2-babc-4cc6-a13b-4fa78e3d0580-webhook-cert\") pod \"metallb-operator-webhook-server-77844fbdcc-cgv2c\" (UID: \"7ba871a2-babc-4cc6-a13b-4fa78e3d0580\") " pod="metallb-system/metallb-operator-webhook-server-77844fbdcc-cgv2c" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.734665 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ba871a2-babc-4cc6-a13b-4fa78e3d0580-apiservice-cert\") pod \"metallb-operator-webhook-server-77844fbdcc-cgv2c\" (UID: \"7ba871a2-babc-4cc6-a13b-4fa78e3d0580\") " pod="metallb-system/metallb-operator-webhook-server-77844fbdcc-cgv2c" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.835884 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ba871a2-babc-4cc6-a13b-4fa78e3d0580-apiservice-cert\") pod \"metallb-operator-webhook-server-77844fbdcc-cgv2c\" (UID: \"7ba871a2-babc-4cc6-a13b-4fa78e3d0580\") " pod="metallb-system/metallb-operator-webhook-server-77844fbdcc-cgv2c" Jan 21 13:16:50 crc 
kubenswrapper[4765]: I0121 13:16:50.835974 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c9rz\" (UniqueName: \"kubernetes.io/projected/7ba871a2-babc-4cc6-a13b-4fa78e3d0580-kube-api-access-2c9rz\") pod \"metallb-operator-webhook-server-77844fbdcc-cgv2c\" (UID: \"7ba871a2-babc-4cc6-a13b-4fa78e3d0580\") " pod="metallb-system/metallb-operator-webhook-server-77844fbdcc-cgv2c" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.836020 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ba871a2-babc-4cc6-a13b-4fa78e3d0580-webhook-cert\") pod \"metallb-operator-webhook-server-77844fbdcc-cgv2c\" (UID: \"7ba871a2-babc-4cc6-a13b-4fa78e3d0580\") " pod="metallb-system/metallb-operator-webhook-server-77844fbdcc-cgv2c" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.842455 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ba871a2-babc-4cc6-a13b-4fa78e3d0580-webhook-cert\") pod \"metallb-operator-webhook-server-77844fbdcc-cgv2c\" (UID: \"7ba871a2-babc-4cc6-a13b-4fa78e3d0580\") " pod="metallb-system/metallb-operator-webhook-server-77844fbdcc-cgv2c" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.856610 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ba871a2-babc-4cc6-a13b-4fa78e3d0580-apiservice-cert\") pod \"metallb-operator-webhook-server-77844fbdcc-cgv2c\" (UID: \"7ba871a2-babc-4cc6-a13b-4fa78e3d0580\") " pod="metallb-system/metallb-operator-webhook-server-77844fbdcc-cgv2c" Jan 21 13:16:50 crc kubenswrapper[4765]: I0121 13:16:50.985190 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 21 13:16:51 crc kubenswrapper[4765]: I0121 13:16:51.012422 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-4nh7z" Jan 21 13:16:51 crc kubenswrapper[4765]: I0121 13:16:51.223719 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 21 13:16:51 crc kubenswrapper[4765]: I0121 13:16:51.383877 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 21 13:16:51 crc kubenswrapper[4765]: I0121 13:16:51.399247 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c9rz\" (UniqueName: \"kubernetes.io/projected/7ba871a2-babc-4cc6-a13b-4fa78e3d0580-kube-api-access-2c9rz\") pod \"metallb-operator-webhook-server-77844fbdcc-cgv2c\" (UID: \"7ba871a2-babc-4cc6-a13b-4fa78e3d0580\") " pod="metallb-system/metallb-operator-webhook-server-77844fbdcc-cgv2c" Jan 21 13:16:51 crc kubenswrapper[4765]: I0121 13:16:51.413411 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2wz9\" (UniqueName: \"kubernetes.io/projected/57ed60d8-a38f-47ba-b66d-6e7e557b4399-kube-api-access-q2wz9\") pod \"metallb-operator-controller-manager-6c66566bf6-ls8r8\" (UID: \"57ed60d8-a38f-47ba-b66d-6e7e557b4399\") " pod="metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8" Jan 21 13:16:51 crc kubenswrapper[4765]: E0121 13:16:51.431266 4765 secret.go:188] Couldn't get secret metallb-system/metallb-operator-controller-manager-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 13:16:51 crc 
kubenswrapper[4765]: E0121 13:16:51.431319 4765 secret.go:188] Couldn't get secret metallb-system/metallb-operator-controller-manager-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 13:16:51 crc kubenswrapper[4765]: E0121 13:16:51.431381 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57ed60d8-a38f-47ba-b66d-6e7e557b4399-webhook-cert podName:57ed60d8-a38f-47ba-b66d-6e7e557b4399 nodeName:}" failed. No retries permitted until 2026-01-21 13:16:51.931356769 +0000 UTC m=+872.949082591 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/57ed60d8-a38f-47ba-b66d-6e7e557b4399-webhook-cert") pod "metallb-operator-controller-manager-6c66566bf6-ls8r8" (UID: "57ed60d8-a38f-47ba-b66d-6e7e557b4399") : failed to sync secret cache: timed out waiting for the condition Jan 21 13:16:51 crc kubenswrapper[4765]: E0121 13:16:51.431426 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57ed60d8-a38f-47ba-b66d-6e7e557b4399-apiservice-cert podName:57ed60d8-a38f-47ba-b66d-6e7e557b4399 nodeName:}" failed. No retries permitted until 2026-01-21 13:16:51.93140029 +0000 UTC m=+872.949126112 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/57ed60d8-a38f-47ba-b66d-6e7e557b4399-apiservice-cert") pod "metallb-operator-controller-manager-6c66566bf6-ls8r8" (UID: "57ed60d8-a38f-47ba-b66d-6e7e557b4399") : failed to sync secret cache: timed out waiting for the condition Jan 21 13:16:51 crc kubenswrapper[4765]: I0121 13:16:51.520300 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-77844fbdcc-cgv2c" Jan 21 13:16:51 crc kubenswrapper[4765]: I0121 13:16:51.735593 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 21 13:16:51 crc kubenswrapper[4765]: I0121 13:16:51.955331 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-77844fbdcc-cgv2c"] Jan 21 13:16:51 crc kubenswrapper[4765]: I0121 13:16:51.956121 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/57ed60d8-a38f-47ba-b66d-6e7e557b4399-apiservice-cert\") pod \"metallb-operator-controller-manager-6c66566bf6-ls8r8\" (UID: \"57ed60d8-a38f-47ba-b66d-6e7e557b4399\") " pod="metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8" Jan 21 13:16:51 crc kubenswrapper[4765]: I0121 13:16:51.956186 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57ed60d8-a38f-47ba-b66d-6e7e557b4399-webhook-cert\") pod \"metallb-operator-controller-manager-6c66566bf6-ls8r8\" (UID: \"57ed60d8-a38f-47ba-b66d-6e7e557b4399\") " pod="metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8" Jan 21 13:16:51 crc kubenswrapper[4765]: I0121 13:16:51.960576 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/57ed60d8-a38f-47ba-b66d-6e7e557b4399-apiservice-cert\") pod \"metallb-operator-controller-manager-6c66566bf6-ls8r8\" (UID: \"57ed60d8-a38f-47ba-b66d-6e7e557b4399\") " pod="metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8" Jan 21 13:16:51 crc kubenswrapper[4765]: 
I0121 13:16:51.974407 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57ed60d8-a38f-47ba-b66d-6e7e557b4399-webhook-cert\") pod \"metallb-operator-controller-manager-6c66566bf6-ls8r8\" (UID: \"57ed60d8-a38f-47ba-b66d-6e7e557b4399\") " pod="metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8" Jan 21 13:16:52 crc kubenswrapper[4765]: I0121 13:16:52.054508 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-77844fbdcc-cgv2c" event={"ID":"7ba871a2-babc-4cc6-a13b-4fa78e3d0580","Type":"ContainerStarted","Data":"7cfa1d89ce26fbe8432c6143f0e285946d22ebb3e780233b89c870312dbcd6ec"} Jan 21 13:16:52 crc kubenswrapper[4765]: I0121 13:16:52.264933 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8" Jan 21 13:16:52 crc kubenswrapper[4765]: I0121 13:16:52.854495 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8"] Jan 21 13:16:52 crc kubenswrapper[4765]: W0121 13:16:52.859803 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57ed60d8_a38f_47ba_b66d_6e7e557b4399.slice/crio-a82ba29f20ced55e70970288b40d86df857d4fdb54274dd4cdcb168e8ffcff01 WatchSource:0}: Error finding container a82ba29f20ced55e70970288b40d86df857d4fdb54274dd4cdcb168e8ffcff01: Status 404 returned error can't find the container with id a82ba29f20ced55e70970288b40d86df857d4fdb54274dd4cdcb168e8ffcff01 Jan 21 13:16:53 crc kubenswrapper[4765]: I0121 13:16:53.062715 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8" event={"ID":"57ed60d8-a38f-47ba-b66d-6e7e557b4399","Type":"ContainerStarted","Data":"a82ba29f20ced55e70970288b40d86df857d4fdb54274dd4cdcb168e8ffcff01"} Jan 21 13:17:01 crc kubenswrapper[4765]: I0121 13:17:01.124992 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-77844fbdcc-cgv2c" event={"ID":"7ba871a2-babc-4cc6-a13b-4fa78e3d0580","Type":"ContainerStarted","Data":"db1bf7e94f6de0b61fa07b56ac0274375695bb7248f9e6c7d609e13fc91e18e8"} Jan 21 13:17:01 crc kubenswrapper[4765]: I0121 13:17:01.125728 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-77844fbdcc-cgv2c" Jan 21 13:17:01 crc kubenswrapper[4765]: I0121 13:17:01.128567 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8" event={"ID":"57ed60d8-a38f-47ba-b66d-6e7e557b4399","Type":"ContainerStarted","Data":"90db4fe338fe6e731d85e1ac4664c9206b4ac8a53e00c9dc519c049829198fb8"} Jan 21 13:17:01 crc kubenswrapper[4765]: I0121 13:17:01.128770 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8" Jan 21 13:17:01 crc kubenswrapper[4765]: I0121 13:17:01.158651 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-77844fbdcc-cgv2c" podStartSLOduration=2.425697894 podStartE2EDuration="11.158621568s" podCreationTimestamp="2026-01-21 13:16:50 +0000 UTC" firstStartedPulling="2026-01-21 13:16:51.97014971 +0000 UTC m=+872.987875532" lastFinishedPulling="2026-01-21 
13:17:00.703073384 +0000 UTC m=+881.720799206" observedRunningTime="2026-01-21 13:17:01.155136305 +0000 UTC m=+882.172862127" watchObservedRunningTime="2026-01-21 13:17:01.158621568 +0000 UTC m=+882.176347390" Jan 21 13:17:11 crc kubenswrapper[4765]: I0121 13:17:11.526523 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-77844fbdcc-cgv2c" Jan 21 13:17:11 crc kubenswrapper[4765]: I0121 13:17:11.548566 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8" podStartSLOduration=13.739522643 podStartE2EDuration="21.54854082s" podCreationTimestamp="2026-01-21 13:16:50 +0000 UTC" firstStartedPulling="2026-01-21 13:16:52.863638316 +0000 UTC m=+873.881364148" lastFinishedPulling="2026-01-21 13:17:00.672656503 +0000 UTC m=+881.690382325" observedRunningTime="2026-01-21 13:17:01.199083338 +0000 UTC m=+882.216809160" watchObservedRunningTime="2026-01-21 13:17:11.54854082 +0000 UTC m=+892.566266642" Jan 21 13:17:32 crc kubenswrapper[4765]: I0121 13:17:32.269357 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6c66566bf6-ls8r8" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.127121 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-qlhwh"] Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.127947 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qlhwh" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.137172 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-h7gxb" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.137773 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.143054 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-zcjrs"] Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.147247 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-qlhwh"] Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.147439 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.154496 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.154657 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.159700 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-frr-sockets\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.159792 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-reloader\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.159842 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbrmg\" (UniqueName: \"kubernetes.io/projected/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-kube-api-access-bbrmg\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.159921 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-metrics\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.159986 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggg4z\" (UniqueName: \"kubernetes.io/projected/af902f5f-216b-41c7-b1e9-56953151dd65-kube-api-access-ggg4z\") pod \"frr-k8s-webhook-server-7df86c4f6c-qlhwh\" (UID: \"af902f5f-216b-41c7-b1e9-56953151dd65\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qlhwh" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.160053 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-metrics-certs\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.160087 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-frr-startup\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.160105 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af902f5f-216b-41c7-b1e9-56953151dd65-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qlhwh\" (UID: \"af902f5f-216b-41c7-b1e9-56953151dd65\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qlhwh" Jan 21 13:17:33 crc 
kubenswrapper[4765]: I0121 13:17:33.160203 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-frr-conf\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.261829 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbrmg\" (UniqueName: \"kubernetes.io/projected/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-kube-api-access-bbrmg\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.261933 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-metrics\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.261983 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggg4z\" (UniqueName: \"kubernetes.io/projected/af902f5f-216b-41c7-b1e9-56953151dd65-kube-api-access-ggg4z\") pod \"frr-k8s-webhook-server-7df86c4f6c-qlhwh\" (UID: \"af902f5f-216b-41c7-b1e9-56953151dd65\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qlhwh" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.262029 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-metrics-certs\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.262057 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-frr-startup\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.262075 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af902f5f-216b-41c7-b1e9-56953151dd65-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qlhwh\" (UID: \"af902f5f-216b-41c7-b1e9-56953151dd65\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qlhwh" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.262131 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-frr-conf\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.262176 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-frr-sockets\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.262224 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-reloader\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: E0121 13:17:33.262412 4765 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 21 13:17:33 crc kubenswrapper[4765]: E0121 13:17:33.262532 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af902f5f-216b-41c7-b1e9-56953151dd65-cert podName:af902f5f-216b-41c7-b1e9-56953151dd65 nodeName:}" failed. No retries permitted until 2026-01-21 13:17:33.76249834 +0000 UTC m=+914.780224162 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af902f5f-216b-41c7-b1e9-56953151dd65-cert") pod "frr-k8s-webhook-server-7df86c4f6c-qlhwh" (UID: "af902f5f-216b-41c7-b1e9-56953151dd65") : secret "frr-k8s-webhook-server-cert" not found Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.262868 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-metrics\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.262872 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-frr-conf\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.263039 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-frr-sockets\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.263252 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-frr-startup\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.263961 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-reloader\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.276738 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-vswxq"] Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.278899 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-vswxq" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.279404 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-skh9c"] Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.281064 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-skh9c" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.284361 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.284715 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.284983 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.285280 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-srdpm" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.285527 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.286326 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-metrics-certs\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.304812 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-skh9c"] Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.306972 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggg4z\" (UniqueName: \"kubernetes.io/projected/af902f5f-216b-41c7-b1e9-56953151dd65-kube-api-access-ggg4z\") pod \"frr-k8s-webhook-server-7df86c4f6c-qlhwh\" (UID: \"af902f5f-216b-41c7-b1e9-56953151dd65\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qlhwh" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.314139 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbrmg\" (UniqueName: \"kubernetes.io/projected/9120122b-7a7d-4bb6-bf58-29b63c9e20bf-kube-api-access-bbrmg\") pod \"frr-k8s-zcjrs\" (UID: \"9120122b-7a7d-4bb6-bf58-29b63c9e20bf\") " pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.465099 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6777\" (UniqueName: \"kubernetes.io/projected/f05e7811-d30d-4f00-b816-a740a454c635-kube-api-access-q6777\") pod \"controller-6968d8fdc4-skh9c\" (UID: \"f05e7811-d30d-4f00-b816-a740a454c635\") " pod="metallb-system/controller-6968d8fdc4-skh9c" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.465190 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f59aeb8-b8fe-44bc-9e55-94eba06a676b-memberlist\") pod \"speaker-vswxq\" (UID: \"8f59aeb8-b8fe-44bc-9e55-94eba06a676b\") " pod="metallb-system/speaker-vswxq" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.465338 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbswj\" (UniqueName: \"kubernetes.io/projected/8f59aeb8-b8fe-44bc-9e55-94eba06a676b-kube-api-access-sbswj\") pod \"speaker-vswxq\" (UID: \"8f59aeb8-b8fe-44bc-9e55-94eba06a676b\") " pod="metallb-system/speaker-vswxq" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.465390 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f05e7811-d30d-4f00-b816-a740a454c635-metrics-certs\") pod \"controller-6968d8fdc4-skh9c\" (UID: \"f05e7811-d30d-4f00-b816-a740a454c635\") " pod="metallb-system/controller-6968d8fdc4-skh9c" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.465418 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8f59aeb8-b8fe-44bc-9e55-94eba06a676b-metallb-excludel2\") pod \"speaker-vswxq\" (UID: \"8f59aeb8-b8fe-44bc-9e55-94eba06a676b\") " pod="metallb-system/speaker-vswxq" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.465538 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f59aeb8-b8fe-44bc-9e55-94eba06a676b-metrics-certs\") pod \"speaker-vswxq\" (UID: \"8f59aeb8-b8fe-44bc-9e55-94eba06a676b\") " pod="metallb-system/speaker-vswxq" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.465608 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f05e7811-d30d-4f00-b816-a740a454c635-cert\") pod \"controller-6968d8fdc4-skh9c\" (UID: \"f05e7811-d30d-4f00-b816-a740a454c635\") " pod="metallb-system/controller-6968d8fdc4-skh9c" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.469749 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.566890 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f05e7811-d30d-4f00-b816-a740a454c635-metrics-certs\") pod \"controller-6968d8fdc4-skh9c\" (UID: \"f05e7811-d30d-4f00-b816-a740a454c635\") " pod="metallb-system/controller-6968d8fdc4-skh9c" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.566975 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8f59aeb8-b8fe-44bc-9e55-94eba06a676b-metallb-excludel2\") pod \"speaker-vswxq\" (UID: \"8f59aeb8-b8fe-44bc-9e55-94eba06a676b\") " pod="metallb-system/speaker-vswxq" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.567012 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f59aeb8-b8fe-44bc-9e55-94eba06a676b-metrics-certs\") pod \"speaker-vswxq\" (UID: \"8f59aeb8-b8fe-44bc-9e55-94eba06a676b\") " pod="metallb-system/speaker-vswxq" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.567047 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f05e7811-d30d-4f00-b816-a740a454c635-cert\") pod \"controller-6968d8fdc4-skh9c\" (UID: \"f05e7811-d30d-4f00-b816-a740a454c635\") " pod="metallb-system/controller-6968d8fdc4-skh9c" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.567073 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6777\" (UniqueName: \"kubernetes.io/projected/f05e7811-d30d-4f00-b816-a740a454c635-kube-api-access-q6777\") pod \"controller-6968d8fdc4-skh9c\" (UID: \"f05e7811-d30d-4f00-b816-a740a454c635\") " pod="metallb-system/controller-6968d8fdc4-skh9c" Jan 21 13:17:33 crc 
kubenswrapper[4765]: I0121 13:17:33.567107 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f59aeb8-b8fe-44bc-9e55-94eba06a676b-memberlist\") pod \"speaker-vswxq\" (UID: \"8f59aeb8-b8fe-44bc-9e55-94eba06a676b\") " pod="metallb-system/speaker-vswxq" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.567175 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbswj\" (UniqueName: \"kubernetes.io/projected/8f59aeb8-b8fe-44bc-9e55-94eba06a676b-kube-api-access-sbswj\") pod \"speaker-vswxq\" (UID: \"8f59aeb8-b8fe-44bc-9e55-94eba06a676b\") " pod="metallb-system/speaker-vswxq" Jan 21 13:17:33 crc kubenswrapper[4765]: E0121 13:17:33.567305 4765 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 13:17:33 crc kubenswrapper[4765]: E0121 13:17:33.567433 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f59aeb8-b8fe-44bc-9e55-94eba06a676b-memberlist podName:8f59aeb8-b8fe-44bc-9e55-94eba06a676b nodeName:}" failed. No retries permitted until 2026-01-21 13:17:34.067403596 +0000 UTC m=+915.085129608 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8f59aeb8-b8fe-44bc-9e55-94eba06a676b-memberlist") pod "speaker-vswxq" (UID: "8f59aeb8-b8fe-44bc-9e55-94eba06a676b") : secret "metallb-memberlist" not found Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.567957 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8f59aeb8-b8fe-44bc-9e55-94eba06a676b-metallb-excludel2\") pod \"speaker-vswxq\" (UID: \"8f59aeb8-b8fe-44bc-9e55-94eba06a676b\") " pod="metallb-system/speaker-vswxq" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.572902 4765 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.573182 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f59aeb8-b8fe-44bc-9e55-94eba06a676b-metrics-certs\") pod \"speaker-vswxq\" (UID: \"8f59aeb8-b8fe-44bc-9e55-94eba06a676b\") " pod="metallb-system/speaker-vswxq" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.573939 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f05e7811-d30d-4f00-b816-a740a454c635-metrics-certs\") pod \"controller-6968d8fdc4-skh9c\" (UID: \"f05e7811-d30d-4f00-b816-a740a454c635\") " pod="metallb-system/controller-6968d8fdc4-skh9c" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.581734 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f05e7811-d30d-4f00-b816-a740a454c635-cert\") pod \"controller-6968d8fdc4-skh9c\" (UID: \"f05e7811-d30d-4f00-b816-a740a454c635\") " pod="metallb-system/controller-6968d8fdc4-skh9c" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.592630 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbswj\" (UniqueName: \"kubernetes.io/projected/8f59aeb8-b8fe-44bc-9e55-94eba06a676b-kube-api-access-sbswj\") pod \"speaker-vswxq\" (UID: \"8f59aeb8-b8fe-44bc-9e55-94eba06a676b\") " pod="metallb-system/speaker-vswxq" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 
13:17:33.593053 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6777\" (UniqueName: \"kubernetes.io/projected/f05e7811-d30d-4f00-b816-a740a454c635-kube-api-access-q6777\") pod \"controller-6968d8fdc4-skh9c\" (UID: \"f05e7811-d30d-4f00-b816-a740a454c635\") " pod="metallb-system/controller-6968d8fdc4-skh9c" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.656875 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-skh9c" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.769847 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af902f5f-216b-41c7-b1e9-56953151dd65-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qlhwh\" (UID: \"af902f5f-216b-41c7-b1e9-56953151dd65\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qlhwh" Jan 21 13:17:33 crc kubenswrapper[4765]: I0121 13:17:33.779164 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af902f5f-216b-41c7-b1e9-56953151dd65-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qlhwh\" (UID: \"af902f5f-216b-41c7-b1e9-56953151dd65\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qlhwh" Jan 21 13:17:34 crc kubenswrapper[4765]: I0121 13:17:34.021307 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-skh9c"] Jan 21 13:17:34 crc kubenswrapper[4765]: I0121 13:17:34.046869 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qlhwh" Jan 21 13:17:34 crc kubenswrapper[4765]: I0121 13:17:34.089169 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f59aeb8-b8fe-44bc-9e55-94eba06a676b-memberlist\") pod \"speaker-vswxq\" (UID: \"8f59aeb8-b8fe-44bc-9e55-94eba06a676b\") " pod="metallb-system/speaker-vswxq" Jan 21 13:17:34 crc kubenswrapper[4765]: E0121 13:17:34.096292 4765 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 13:17:34 crc kubenswrapper[4765]: E0121 13:17:34.096428 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f59aeb8-b8fe-44bc-9e55-94eba06a676b-memberlist podName:8f59aeb8-b8fe-44bc-9e55-94eba06a676b nodeName:}" failed. No retries permitted until 2026-01-21 13:17:35.096402062 +0000 UTC m=+916.114127884 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8f59aeb8-b8fe-44bc-9e55-94eba06a676b-memberlist") pod "speaker-vswxq" (UID: "8f59aeb8-b8fe-44bc-9e55-94eba06a676b") : secret "metallb-memberlist" not found Jan 21 13:17:34 crc kubenswrapper[4765]: I0121 13:17:34.357533 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zcjrs" event={"ID":"9120122b-7a7d-4bb6-bf58-29b63c9e20bf","Type":"ContainerStarted","Data":"e29b98e6b2c35db09bdea750ef4b5d1ce825ef3f76189b463842e39a0e3f6215"} Jan 21 13:17:34 crc kubenswrapper[4765]: I0121 13:17:34.359021 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-skh9c" event={"ID":"f05e7811-d30d-4f00-b816-a740a454c635","Type":"ContainerStarted","Data":"ec0f7b0020040402e6f1bee2df9f018704147b5a3d1a7c4eac5d8f57b918a49f"} Jan 21 13:17:34 crc kubenswrapper[4765]: I0121 13:17:34.359060 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-skh9c" event={"ID":"f05e7811-d30d-4f00-b816-a740a454c635","Type":"ContainerStarted","Data":"06579bae606c97a4d55c3503f92e241bc7e24e8b8fcc31bf01a8e9074b7ceed8"} Jan 21 13:17:34 crc kubenswrapper[4765]: I0121 13:17:34.558779 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-qlhwh"] Jan 21 13:17:35 crc kubenswrapper[4765]: I0121 13:17:35.116607 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f59aeb8-b8fe-44bc-9e55-94eba06a676b-memberlist\") pod \"speaker-vswxq\" (UID: \"8f59aeb8-b8fe-44bc-9e55-94eba06a676b\") " pod="metallb-system/speaker-vswxq" Jan 21 13:17:35 crc kubenswrapper[4765]: I0121 13:17:35.122984 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f59aeb8-b8fe-44bc-9e55-94eba06a676b-memberlist\") pod \"speaker-vswxq\" (UID: \"8f59aeb8-b8fe-44bc-9e55-94eba06a676b\") " pod="metallb-system/speaker-vswxq" Jan 21 13:17:35 crc kubenswrapper[4765]: I0121 13:17:35.144679 4765 util.go:30] "No sandbox for pod can be found. 
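Need to start a new one" pod="metallb-system/speaker-vswxq"

Note the backoff doubling on the failed "memberlist" mount: the retry at 13:17:33 was deferred 500ms, the one at 13:17:34 deferred 1s, and at 13:17:35 the secret finally existed and SetUp succeeded. A sketch of that retry pattern with k8s.io/apimachinery's wait package (constants chosen to mirror the log; this is not the kubelet's literal nestedpendingoperations code):

    package main

    import (
        "errors"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    var errSecretNotFound = errors.New(`secret "metallb-memberlist" not found`)

    // trySetUp stands in for MountVolume.SetUp; the secret appears on try 3.
    func trySetUp(attempt int) error {
        if attempt < 3 {
            return errSecretNotFound
        }
        return nil
    }

    func main() {
        attempt := 0
        // 500ms, 1s, 2s, ... — the doubling visible in durationBeforeRetry.
        backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2.0, Steps: 5}
        err := wait.ExponentialBackoff(backoff, func() (bool, error) {
            attempt++
            if err := trySetUp(attempt); err != nil {
                fmt.Printf("attempt %d: %v (will retry)\n", attempt, err)
                return false, nil // not done; sleep the next backoff step
            }
            return true, nil // mounted
        })
        fmt.Println("final:", err)
    }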
Jan 21 13:17:35 crc kubenswrapper[4765]: W0121 13:17:35.168928 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f59aeb8_b8fe_44bc_9e55_94eba06a676b.slice/crio-907be80c196b99dfdc6f765a72a2b32ece19f6c12ad06ac013e89b9118cccc5c WatchSource:0}: Error finding container 907be80c196b99dfdc6f765a72a2b32ece19f6c12ad06ac013e89b9118cccc5c: Status 404 returned error can't find the container with id 907be80c196b99dfdc6f765a72a2b32ece19f6c12ad06ac013e89b9118cccc5c Jan 21 13:17:35 crc kubenswrapper[4765]: I0121 13:17:35.368908 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qlhwh" event={"ID":"af902f5f-216b-41c7-b1e9-56953151dd65","Type":"ContainerStarted","Data":"642bf0c385ad2f7627529a695ff00f3f3045a9043f6ef9cad7155ed4da752124"} Jan 21 13:17:35 crc kubenswrapper[4765]: I0121 13:17:35.372005 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-skh9c" event={"ID":"f05e7811-d30d-4f00-b816-a740a454c635","Type":"ContainerStarted","Data":"b5fac068f8ec19d9e473cdd72ab48f3c65d164ca15fc51ae607f1356dafc7357"} Jan 21 13:17:35 crc kubenswrapper[4765]: I0121 13:17:35.372181 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-skh9c" Jan 21 13:17:35 crc kubenswrapper[4765]: I0121 13:17:35.373459 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vswxq" event={"ID":"8f59aeb8-b8fe-44bc-9e55-94eba06a676b","Type":"ContainerStarted","Data":"907be80c196b99dfdc6f765a72a2b32ece19f6c12ad06ac013e89b9118cccc5c"} Jan 21 13:17:35 crc kubenswrapper[4765]: I0121 13:17:35.401400 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-skh9c" podStartSLOduration=2.401324959 podStartE2EDuration="2.401324959s" podCreationTimestamp="2026-01-21 13:17:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:17:35.391865548 +0000 UTC m=+916.409591390" watchObservedRunningTime="2026-01-21 13:17:35.401324959 +0000 UTC m=+916.419050781" Jan 21 13:17:36 crc kubenswrapper[4765]: I0121 13:17:36.392922 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vswxq" event={"ID":"8f59aeb8-b8fe-44bc-9e55-94eba06a676b","Type":"ContainerStarted","Data":"df361751d8230fbdab5462d2a1a43ed87de9518d0e0323d28adc9fba87b6ebb2"} Jan 21 13:17:37 crc kubenswrapper[4765]: I0121 13:17:37.406458 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vswxq" event={"ID":"8f59aeb8-b8fe-44bc-9e55-94eba06a676b","Type":"ContainerStarted","Data":"e9a0811966c01f6f971cd60ea7f762f8c083db7337a84fba403ef64656bef05a"} Jan 21 13:17:37 crc kubenswrapper[4765]: I0121 13:17:37.406841 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-vswxq" Jan 21 13:17:37 crc kubenswrapper[4765]: I0121 13:17:37.436858 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-vswxq" podStartSLOduration=4.43682065 podStartE2EDuration="4.43682065s" podCreationTimestamp="2026-01-21 13:17:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:17:37.427381709 +0000 UTC
m=+918.445107531" watchObservedRunningTime="2026-01-21 13:17:37.43682065 +0000 UTC m=+918.454546472" Jan 21 13:17:44 crc kubenswrapper[4765]: I0121 13:17:44.474233 4765 generic.go:334] "Generic (PLEG): container finished" podID="9120122b-7a7d-4bb6-bf58-29b63c9e20bf" containerID="ed29b3d076a412e4130bc63856848a65aebb6f2bf0178593049270fa65ed39e3" exitCode=0 Jan 21 13:17:44 crc kubenswrapper[4765]: I0121 13:17:44.474316 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zcjrs" event={"ID":"9120122b-7a7d-4bb6-bf58-29b63c9e20bf","Type":"ContainerDied","Data":"ed29b3d076a412e4130bc63856848a65aebb6f2bf0178593049270fa65ed39e3"} Jan 21 13:17:44 crc kubenswrapper[4765]: I0121 13:17:44.481300 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qlhwh" event={"ID":"af902f5f-216b-41c7-b1e9-56953151dd65","Type":"ContainerStarted","Data":"38551c4457faa7b752a226bdc7fa40508186ff41a4e75ef74d40a55be4693738"} Jan 21 13:17:44 crc kubenswrapper[4765]: I0121 13:17:44.485312 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qlhwh" Jan 21 13:17:44 crc kubenswrapper[4765]: I0121 13:17:44.553370 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qlhwh" podStartSLOduration=2.492625266 podStartE2EDuration="11.553352557s" podCreationTimestamp="2026-01-21 13:17:33 +0000 UTC" firstStartedPulling="2026-01-21 13:17:34.572118662 +0000 UTC m=+915.589844484" lastFinishedPulling="2026-01-21 13:17:43.632845953 +0000 UTC m=+924.650571775" observedRunningTime="2026-01-21 13:17:44.552923345 +0000 UTC m=+925.570649167" watchObservedRunningTime="2026-01-21 13:17:44.553352557 +0000 UTC m=+925.571078379" Jan 21 13:17:45 crc kubenswrapper[4765]: I0121 13:17:45.149906 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-vswxq" Jan 21 13:17:45 crc kubenswrapper[4765]: I0121 13:17:45.491490 4765 generic.go:334] "Generic (PLEG): container finished" podID="9120122b-7a7d-4bb6-bf58-29b63c9e20bf" containerID="1daf8c3a46cea0e51844be674dfe926ac14e8caebd2d67721b944d187c575041" exitCode=0 Jan 21 13:17:45 crc kubenswrapper[4765]: I0121 13:17:45.491649 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zcjrs" event={"ID":"9120122b-7a7d-4bb6-bf58-29b63c9e20bf","Type":"ContainerDied","Data":"1daf8c3a46cea0e51844be674dfe926ac14e8caebd2d67721b944d187c575041"} Jan 21 13:17:46 crc kubenswrapper[4765]: I0121 13:17:46.500455 4765 generic.go:334] "Generic (PLEG): container finished" podID="9120122b-7a7d-4bb6-bf58-29b63c9e20bf" containerID="15eb807c475a01f29af7225d3b49ef5269a8c49ccdc1b1517ba8fba2d062cd0c" exitCode=0 Jan 21 13:17:46 crc kubenswrapper[4765]: I0121 13:17:46.500572 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zcjrs" event={"ID":"9120122b-7a7d-4bb6-bf58-29b63c9e20bf","Type":"ContainerDied","Data":"15eb807c475a01f29af7225d3b49ef5269a8c49ccdc1b1517ba8fba2d062cd0c"} Jan 21 13:17:47 crc kubenswrapper[4765]: I0121 13:17:47.518994 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zcjrs" event={"ID":"9120122b-7a7d-4bb6-bf58-29b63c9e20bf","Type":"ContainerStarted","Data":"2c51bb3ad4b25304ae3de5c7c9b425df9333ffc7f89ee8a1472ac3ba17966f4a"} Jan 21 13:17:47 crc kubenswrapper[4765]: I0121 13:17:47.519569 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
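pod="metallb-system/frr-k8s-zcjrs" event={"ID":"9120122b-7a7d-4bb6-bf58-29b63c9e20bf","Type":"ContainerStarted","Data":"7ca27f05e2c9386e8a58ca9570b3801a64587b4b7cda0889057d3bfb0696114d"}

The three ContainerDied events with exitCode=0 (13:17:44-13:17:46) followed by this burst of ContainerStarted events is the normal init-container flow: init containers run one at a time, in order, each must exit 0 before the next begins, and the app containers start only after the last one finishes. A minimal sketch of that pod shape using the Go API types; the container names and images here are assumptions for illustration, not read from the actual frr-k8s DaemonSet:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "frr-k8s-example", Namespace: "metallb-system"},
            Spec: corev1.PodSpec{
                // Run strictly in sequence; each must exit 0 before the next starts.
                InitContainers: []corev1.Container{
                    {Name: "cp-frr-files", Image: "example.io/frr:latest"},
                    {Name: "cp-reloader", Image: "example.io/frr-k8s:latest"},
                    {Name: "cp-metrics", Image: "example.io/frr-k8s:latest"},
                },
                // All of these start together once the init phase completes.
                Containers: []corev1.Container{
                    {Name: "controller", Image: "example.io/frr-k8s:latest"},
                    {Name: "frr", Image: "example.io/frr:latest"},
                    {Name: "reloader", Image: "example.io/frr:latest"},
                },
            },
        }
        fmt.Printf("%s: %d init containers, %d app containers\n",
            pod.Name, len(pod.Spec.InitContainers), len(pod.Spec.Containers))
    }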
pod="metallb-system/frr-k8s-zcjrs" event={"ID":"9120122b-7a7d-4bb6-bf58-29b63c9e20bf","Type":"ContainerStarted","Data":"7ca27f05e2c9386e8a58ca9570b3801a64587b4b7cda0889057d3bfb0696114d"} Jan 21 13:17:47 crc kubenswrapper[4765]: I0121 13:17:47.519584 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zcjrs" event={"ID":"9120122b-7a7d-4bb6-bf58-29b63c9e20bf","Type":"ContainerStarted","Data":"d6ee8fabf77200abc10dec84d227cfb09b11316964961e071bc3ad90df0740f9"} Jan 21 13:17:47 crc kubenswrapper[4765]: I0121 13:17:47.519595 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zcjrs" event={"ID":"9120122b-7a7d-4bb6-bf58-29b63c9e20bf","Type":"ContainerStarted","Data":"8692b9450e89cf9a73c843c0f8288f651973084468fe26a4d09f434e685ca6aa"} Jan 21 13:17:47 crc kubenswrapper[4765]: I0121 13:17:47.519607 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zcjrs" event={"ID":"9120122b-7a7d-4bb6-bf58-29b63c9e20bf","Type":"ContainerStarted","Data":"28dd8c04b52368b6f789bd51814c31044287061497eec12284cf0d97198fb33d"} Jan 21 13:17:48 crc kubenswrapper[4765]: I0121 13:17:48.477870 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-dlnr6"] Jan 21 13:17:48 crc kubenswrapper[4765]: I0121 13:17:48.479451 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-dlnr6" Jan 21 13:17:48 crc kubenswrapper[4765]: I0121 13:17:48.482727 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-862nm" Jan 21 13:17:48 crc kubenswrapper[4765]: I0121 13:17:48.482957 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 21 13:17:48 crc kubenswrapper[4765]: I0121 13:17:48.483545 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 21 13:17:48 crc kubenswrapper[4765]: I0121 13:17:48.532840 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zcjrs" event={"ID":"9120122b-7a7d-4bb6-bf58-29b63c9e20bf","Type":"ContainerStarted","Data":"1c3bd1ac36f75445d237fa71581310aae191a69790bedffce1a07f9c5e63b36d"} Jan 21 13:17:48 crc kubenswrapper[4765]: I0121 13:17:48.533076 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:48 crc kubenswrapper[4765]: I0121 13:17:48.556615 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dlnr6"] Jan 21 13:17:48 crc kubenswrapper[4765]: I0121 13:17:48.566278 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-zcjrs" podStartSLOduration=5.670769434 podStartE2EDuration="15.566255931s" podCreationTimestamp="2026-01-21 13:17:33 +0000 UTC" firstStartedPulling="2026-01-21 13:17:33.713922594 +0000 UTC m=+914.731648416" lastFinishedPulling="2026-01-21 13:17:43.609409081 +0000 UTC m=+924.627134913" observedRunningTime="2026-01-21 13:17:48.562843334 +0000 UTC m=+929.580569156" watchObservedRunningTime="2026-01-21 13:17:48.566255931 +0000 UTC m=+929.583981753" Jan 21 13:17:48 crc kubenswrapper[4765]: I0121 13:17:48.582374 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px54x\" (UniqueName: 
\"kubernetes.io/projected/5447dc31-f07a-42e4-93ca-140afbcbb3fb-kube-api-access-px54x\") pod \"openstack-operator-index-dlnr6\" (UID: \"5447dc31-f07a-42e4-93ca-140afbcbb3fb\") " pod="openstack-operators/openstack-operator-index-dlnr6" Jan 21 13:17:48 crc kubenswrapper[4765]: I0121 13:17:48.684071 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px54x\" (UniqueName: \"kubernetes.io/projected/5447dc31-f07a-42e4-93ca-140afbcbb3fb-kube-api-access-px54x\") pod \"openstack-operator-index-dlnr6\" (UID: \"5447dc31-f07a-42e4-93ca-140afbcbb3fb\") " pod="openstack-operators/openstack-operator-index-dlnr6" Jan 21 13:17:48 crc kubenswrapper[4765]: I0121 13:17:48.711695 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px54x\" (UniqueName: \"kubernetes.io/projected/5447dc31-f07a-42e4-93ca-140afbcbb3fb-kube-api-access-px54x\") pod \"openstack-operator-index-dlnr6\" (UID: \"5447dc31-f07a-42e4-93ca-140afbcbb3fb\") " pod="openstack-operators/openstack-operator-index-dlnr6" Jan 21 13:17:48 crc kubenswrapper[4765]: I0121 13:17:48.800849 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-dlnr6" Jan 21 13:17:49 crc kubenswrapper[4765]: I0121 13:17:49.307729 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dlnr6"] Jan 21 13:17:49 crc kubenswrapper[4765]: I0121 13:17:49.547396 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dlnr6" event={"ID":"5447dc31-f07a-42e4-93ca-140afbcbb3fb","Type":"ContainerStarted","Data":"2c08250c183e226fbc327e8182a5e387c5095d42d87440cdbb912a93bf2dde58"} Jan 21 13:17:51 crc kubenswrapper[4765]: I0121 13:17:51.787808 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-dlnr6"] Jan 21 13:17:52 crc kubenswrapper[4765]: I0121 13:17:52.395952 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-p9ml4"] Jan 21 13:17:52 crc kubenswrapper[4765]: I0121 13:17:52.397059 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-p9ml4" Jan 21 13:17:52 crc kubenswrapper[4765]: I0121 13:17:52.411859 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-p9ml4"] Jan 21 13:17:52 crc kubenswrapper[4765]: I0121 13:17:52.477392 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j7qn\" (UniqueName: \"kubernetes.io/projected/d35e26b9-ec61-4be2-b6f6-f40544f4094f-kube-api-access-5j7qn\") pod \"openstack-operator-index-p9ml4\" (UID: \"d35e26b9-ec61-4be2-b6f6-f40544f4094f\") " pod="openstack-operators/openstack-operator-index-p9ml4" Jan 21 13:17:52 crc kubenswrapper[4765]: I0121 13:17:52.570227 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dlnr6" event={"ID":"5447dc31-f07a-42e4-93ca-140afbcbb3fb","Type":"ContainerStarted","Data":"717d9589145339c5f9927367811718d672fce458963a94d47c240f802bf28243"} Jan 21 13:17:52 crc kubenswrapper[4765]: I0121 13:17:52.570422 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-dlnr6" podUID="5447dc31-f07a-42e4-93ca-140afbcbb3fb" containerName="registry-server" containerID="cri-o://717d9589145339c5f9927367811718d672fce458963a94d47c240f802bf28243" gracePeriod=2 Jan 21 13:17:52 crc kubenswrapper[4765]: I0121 13:17:52.578182 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j7qn\" (UniqueName: \"kubernetes.io/projected/d35e26b9-ec61-4be2-b6f6-f40544f4094f-kube-api-access-5j7qn\") pod \"openstack-operator-index-p9ml4\" (UID: \"d35e26b9-ec61-4be2-b6f6-f40544f4094f\") " pod="openstack-operators/openstack-operator-index-p9ml4" Jan 21 13:17:52 crc kubenswrapper[4765]: I0121 13:17:52.595299 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-dlnr6" podStartSLOduration=1.979475271 podStartE2EDuration="4.595275578s" podCreationTimestamp="2026-01-21 13:17:48 +0000 UTC" firstStartedPulling="2026-01-21 13:17:49.315006673 +0000 UTC m=+930.332732495" lastFinishedPulling="2026-01-21 13:17:51.93080698 +0000 UTC m=+932.948532802" observedRunningTime="2026-01-21 13:17:52.594527517 +0000 UTC m=+933.612253339" watchObservedRunningTime="2026-01-21 13:17:52.595275578 +0000 UTC m=+933.613001400" Jan 21 13:17:52 crc kubenswrapper[4765]: I0121 13:17:52.604509 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j7qn\" (UniqueName: \"kubernetes.io/projected/d35e26b9-ec61-4be2-b6f6-f40544f4094f-kube-api-access-5j7qn\") pod \"openstack-operator-index-p9ml4\" (UID: \"d35e26b9-ec61-4be2-b6f6-f40544f4094f\") " pod="openstack-operators/openstack-operator-index-p9ml4" Jan 21 13:17:52 crc kubenswrapper[4765]: I0121 13:17:52.718017 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-p9ml4" Jan 21 13:17:52 crc kubenswrapper[4765]: I0121 13:17:52.951770 4765 util.go:48] "No ready sandbox for pod can be found. 
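Need to start a new one" pod="openstack-operators/openstack-operator-index-dlnr6"

In the "Observed pod startup duration" entry just above, podStartE2EDuration is plain wall clock from podCreationTimestamp to watchObservedRunningTime, while podStartSLOduration additionally subtracts the image-pull window: 4.595275578s - (13:17:51.93080698 - 13:17:49.315006673) = 1.979475271s, exactly the logged SLO value. A small Go check of that arithmetic, with the timestamps copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2026-01-21 13:17:48 +0000 UTC")
        firstPull, _ := time.Parse(layout, "2026-01-21 13:17:49.315006673 +0000 UTC")
        lastPull, _ := time.Parse(layout, "2026-01-21 13:17:51.93080698 +0000 UTC")
        running, _ := time.Parse(layout, "2026-01-21 13:17:52.595275578 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // minus the image-pull window

        fmt.Println(e2e) // 4.595275578s
        fmt.Println(slo) // 1.979475271s, matching podStartSLOduration
    }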
Jan 21 13:17:52 crc kubenswrapper[4765]: I0121 13:17:52.983806 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px54x\" (UniqueName: \"kubernetes.io/projected/5447dc31-f07a-42e4-93ca-140afbcbb3fb-kube-api-access-px54x\") pod \"5447dc31-f07a-42e4-93ca-140afbcbb3fb\" (UID: \"5447dc31-f07a-42e4-93ca-140afbcbb3fb\") " Jan 21 13:17:52 crc kubenswrapper[4765]: I0121 13:17:52.984057 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-p9ml4"] Jan 21 13:17:52 crc kubenswrapper[4765]: I0121 13:17:52.989839 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5447dc31-f07a-42e4-93ca-140afbcbb3fb-kube-api-access-px54x" (OuterVolumeSpecName: "kube-api-access-px54x") pod "5447dc31-f07a-42e4-93ca-140afbcbb3fb" (UID: "5447dc31-f07a-42e4-93ca-140afbcbb3fb"). InnerVolumeSpecName "kube-api-access-px54x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:17:53 crc kubenswrapper[4765]: I0121 13:17:53.085287 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px54x\" (UniqueName: \"kubernetes.io/projected/5447dc31-f07a-42e4-93ca-140afbcbb3fb-kube-api-access-px54x\") on node \"crc\" DevicePath \"\"" Jan 21 13:17:53 crc kubenswrapper[4765]: I0121 13:17:53.470808 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:53 crc kubenswrapper[4765]: I0121 13:17:53.518568 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-zcjrs" Jan 21 13:17:53 crc kubenswrapper[4765]: I0121 13:17:53.577462 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-p9ml4" event={"ID":"d35e26b9-ec61-4be2-b6f6-f40544f4094f","Type":"ContainerStarted","Data":"f74095447e2b7420e9dde361a2193cc8c047b8a595186cfe0bcecae22dfc924b"} Jan 21 13:17:53 crc kubenswrapper[4765]: I0121 13:17:53.577515 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-p9ml4" event={"ID":"d35e26b9-ec61-4be2-b6f6-f40544f4094f","Type":"ContainerStarted","Data":"9694d8362da5465e1848cb19e6a5416293f5b5f148f2312803d7febee471958d"} Jan 21 13:17:53 crc kubenswrapper[4765]: I0121 13:17:53.578850 4765 generic.go:334] "Generic (PLEG): container finished" podID="5447dc31-f07a-42e4-93ca-140afbcbb3fb" containerID="717d9589145339c5f9927367811718d672fce458963a94d47c240f802bf28243" exitCode=0 Jan 21 13:17:53 crc kubenswrapper[4765]: I0121 13:17:53.578933 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dlnr6" event={"ID":"5447dc31-f07a-42e4-93ca-140afbcbb3fb","Type":"ContainerDied","Data":"717d9589145339c5f9927367811718d672fce458963a94d47c240f802bf28243"} Jan 21 13:17:53 crc kubenswrapper[4765]: I0121 13:17:53.578984 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dlnr6" event={"ID":"5447dc31-f07a-42e4-93ca-140afbcbb3fb","Type":"ContainerDied","Data":"2c08250c183e226fbc327e8182a5e387c5095d42d87440cdbb912a93bf2dde58"} Jan 21 13:17:53 crc kubenswrapper[4765]: I0121 13:17:53.579006 4765 scope.go:117] "RemoveContainer" containerID="717d9589145339c5f9927367811718d672fce458963a94d47c240f802bf28243" Jan 21 13:17:53 crc kubenswrapper[4765]: I0121 13:17:53.578939 4765 util.go:48] "No ready sandbox
for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-dlnr6" Jan 21 13:17:53 crc kubenswrapper[4765]: I0121 13:17:53.598037 4765 scope.go:117] "RemoveContainer" containerID="717d9589145339c5f9927367811718d672fce458963a94d47c240f802bf28243" Jan 21 13:17:53 crc kubenswrapper[4765]: E0121 13:17:53.599085 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"717d9589145339c5f9927367811718d672fce458963a94d47c240f802bf28243\": container with ID starting with 717d9589145339c5f9927367811718d672fce458963a94d47c240f802bf28243 not found: ID does not exist" containerID="717d9589145339c5f9927367811718d672fce458963a94d47c240f802bf28243" Jan 21 13:17:53 crc kubenswrapper[4765]: I0121 13:17:53.599148 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"717d9589145339c5f9927367811718d672fce458963a94d47c240f802bf28243"} err="failed to get container status \"717d9589145339c5f9927367811718d672fce458963a94d47c240f802bf28243\": rpc error: code = NotFound desc = could not find container \"717d9589145339c5f9927367811718d672fce458963a94d47c240f802bf28243\": container with ID starting with 717d9589145339c5f9927367811718d672fce458963a94d47c240f802bf28243 not found: ID does not exist" Jan 21 13:17:53 crc kubenswrapper[4765]: I0121 13:17:53.605950 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-p9ml4" podStartSLOduration=1.49028222 podStartE2EDuration="1.605932614s" podCreationTimestamp="2026-01-21 13:17:52 +0000 UTC" firstStartedPulling="2026-01-21 13:17:52.992148858 +0000 UTC m=+934.009874680" lastFinishedPulling="2026-01-21 13:17:53.107799252 +0000 UTC m=+934.125525074" observedRunningTime="2026-01-21 13:17:53.604089051 +0000 UTC m=+934.621814873" watchObservedRunningTime="2026-01-21 13:17:53.605932614 +0000 UTC m=+934.623658436" Jan 21 13:17:53 crc kubenswrapper[4765]: I0121 13:17:53.621447 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-dlnr6"] Jan 21 13:17:53 crc kubenswrapper[4765]: I0121 13:17:53.625753 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-dlnr6"] Jan 21 13:17:53 crc kubenswrapper[4765]: I0121 13:17:53.665972 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-skh9c" Jan 21 13:17:54 crc kubenswrapper[4765]: I0121 13:17:54.056070 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qlhwh" Jan 21 13:17:55 crc kubenswrapper[4765]: I0121 13:17:55.621174 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5447dc31-f07a-42e4-93ca-140afbcbb3fb" path="/var/lib/kubelet/pods/5447dc31-f07a-42e4-93ca-140afbcbb3fb/volumes" Jan 21 13:18:02 crc kubenswrapper[4765]: I0121 13:18:02.718555 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-p9ml4" Jan 21 13:18:02 crc kubenswrapper[4765]: I0121 13:18:02.719123 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-p9ml4" Jan 21 13:18:02 crc kubenswrapper[4765]: I0121 13:18:02.748846 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-p9ml4" Jan 21 13:18:03 
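crc kubenswrapper[4765]: I0121 13:18:03.473050 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-zcjrs"

The RemoveContainer / "ContainerStatus from runtime service failed ... NotFound" pair at 13:17:53.598-599 above is a benign race: the container was already gone when the kubelet asked the runtime for its status, and the deletion is simply treated as already done. The usual Go pattern for tolerating that gRPC NotFound, sketched here with a stand-in remove function rather than the kubelet's actual pod_container_deletor:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer stands in for a CRI RemoveContainer call; here it always
    // reports NotFound, as CRI-O did for container 717d9589... above.
    func removeContainer(id string) error {
        return status.Errorf(codes.NotFound, "could not find container %q", id)
    }

    // removeIfPresent treats "already deleted" as success, so cleanup is idempotent.
    func removeIfPresent(id string) error {
        if err := removeContainer(id); status.Code(err) != codes.NotFound {
            return err
        }
        return nil // nothing to do; it was removed out from under us
    }

    func main() {
        if err := removeIfPresent("717d9589"); err != nil {
            fmt.Println("remove failed:", err)
            return
        }
        fmt.Println("removed (or already gone)")
    }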
Jan 21 13:18:03 crc kubenswrapper[4765]: I0121 13:18:03.686965 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-p9ml4" Jan 21 13:18:08 crc kubenswrapper[4765]: I0121 13:18:08.864928 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m"] Jan 21 13:18:08 crc kubenswrapper[4765]: E0121 13:18:08.865545 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5447dc31-f07a-42e4-93ca-140afbcbb3fb" containerName="registry-server" Jan 21 13:18:08 crc kubenswrapper[4765]: I0121 13:18:08.865560 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="5447dc31-f07a-42e4-93ca-140afbcbb3fb" containerName="registry-server" Jan 21 13:18:08 crc kubenswrapper[4765]: I0121 13:18:08.865697 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="5447dc31-f07a-42e4-93ca-140afbcbb3fb" containerName="registry-server" Jan 21 13:18:08 crc kubenswrapper[4765]: I0121 13:18:08.866567 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m" Jan 21 13:18:08 crc kubenswrapper[4765]: I0121 13:18:08.878816 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-6vs62" Jan 21 13:18:08 crc kubenswrapper[4765]: I0121 13:18:08.912332 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m"] Jan 21 13:18:08 crc kubenswrapper[4765]: I0121 13:18:08.973726 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20b31ee6-0264-4ffb-b43c-abbea443e89e-util\") pod \"6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m\" (UID: \"20b31ee6-0264-4ffb-b43c-abbea443e89e\") " pod="openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m" Jan 21 13:18:08 crc kubenswrapper[4765]: I0121 13:18:08.973839 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hwh6\" (UniqueName: \"kubernetes.io/projected/20b31ee6-0264-4ffb-b43c-abbea443e89e-kube-api-access-7hwh6\") pod \"6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m\" (UID: \"20b31ee6-0264-4ffb-b43c-abbea443e89e\") " pod="openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m" Jan 21 13:18:08 crc kubenswrapper[4765]: I0121 13:18:08.973914 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20b31ee6-0264-4ffb-b43c-abbea443e89e-bundle\") pod \"6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m\" (UID: \"20b31ee6-0264-4ffb-b43c-abbea443e89e\") " pod="openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m" Jan 21 13:18:09 crc kubenswrapper[4765]: I0121 13:18:09.075639 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20b31ee6-0264-4ffb-b43c-abbea443e89e-util\") pod \"6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m\" (UID:
\"20b31ee6-0264-4ffb-b43c-abbea443e89e\") " pod="openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m" Jan 21 13:18:09 crc kubenswrapper[4765]: I0121 13:18:09.075715 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hwh6\" (UniqueName: \"kubernetes.io/projected/20b31ee6-0264-4ffb-b43c-abbea443e89e-kube-api-access-7hwh6\") pod \"6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m\" (UID: \"20b31ee6-0264-4ffb-b43c-abbea443e89e\") " pod="openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m" Jan 21 13:18:09 crc kubenswrapper[4765]: I0121 13:18:09.075774 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20b31ee6-0264-4ffb-b43c-abbea443e89e-bundle\") pod \"6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m\" (UID: \"20b31ee6-0264-4ffb-b43c-abbea443e89e\") " pod="openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m" Jan 21 13:18:09 crc kubenswrapper[4765]: I0121 13:18:09.076608 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20b31ee6-0264-4ffb-b43c-abbea443e89e-util\") pod \"6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m\" (UID: \"20b31ee6-0264-4ffb-b43c-abbea443e89e\") " pod="openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m" Jan 21 13:18:09 crc kubenswrapper[4765]: I0121 13:18:09.076642 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20b31ee6-0264-4ffb-b43c-abbea443e89e-bundle\") pod \"6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m\" (UID: \"20b31ee6-0264-4ffb-b43c-abbea443e89e\") " pod="openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m" Jan 21 13:18:09 crc kubenswrapper[4765]: I0121 13:18:09.095139 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hwh6\" (UniqueName: \"kubernetes.io/projected/20b31ee6-0264-4ffb-b43c-abbea443e89e-kube-api-access-7hwh6\") pod \"6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m\" (UID: \"20b31ee6-0264-4ffb-b43c-abbea443e89e\") " pod="openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m" Jan 21 13:18:09 crc kubenswrapper[4765]: I0121 13:18:09.195679 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m" Jan 21 13:18:09 crc kubenswrapper[4765]: I0121 13:18:09.539353 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m"] Jan 21 13:18:09 crc kubenswrapper[4765]: W0121 13:18:09.540121 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20b31ee6_0264_4ffb_b43c_abbea443e89e.slice/crio-9894358d059ed1e5bfc6738932a0538d39138cc2622685379d0720f37f62d0a0 WatchSource:0}: Error finding container 9894358d059ed1e5bfc6738932a0538d39138cc2622685379d0720f37f62d0a0: Status 404 returned error can't find the container with id 9894358d059ed1e5bfc6738932a0538d39138cc2622685379d0720f37f62d0a0 Jan 21 13:18:09 crc kubenswrapper[4765]: I0121 13:18:09.708583 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m" event={"ID":"20b31ee6-0264-4ffb-b43c-abbea443e89e","Type":"ContainerStarted","Data":"9894358d059ed1e5bfc6738932a0538d39138cc2622685379d0720f37f62d0a0"} Jan 21 13:18:12 crc kubenswrapper[4765]: I0121 13:18:12.729491 4765 generic.go:334] "Generic (PLEG): container finished" podID="20b31ee6-0264-4ffb-b43c-abbea443e89e" containerID="49402692e45a1a149a42dd9049ebfe93c26056cfa727c5f73dd2165223d552dd" exitCode=0 Jan 21 13:18:12 crc kubenswrapper[4765]: I0121 13:18:12.729646 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m" event={"ID":"20b31ee6-0264-4ffb-b43c-abbea443e89e","Type":"ContainerDied","Data":"49402692e45a1a149a42dd9049ebfe93c26056cfa727c5f73dd2165223d552dd"} Jan 21 13:18:13 crc kubenswrapper[4765]: I0121 13:18:13.819041 4765 generic.go:334] "Generic (PLEG): container finished" podID="20b31ee6-0264-4ffb-b43c-abbea443e89e" containerID="2f23fa268080f90a37d06800f3aa3222abc3bf5ccdd6d2bb74e454cefa50ce93" exitCode=0 Jan 21 13:18:13 crc kubenswrapper[4765]: I0121 13:18:13.820968 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m" event={"ID":"20b31ee6-0264-4ffb-b43c-abbea443e89e","Type":"ContainerDied","Data":"2f23fa268080f90a37d06800f3aa3222abc3bf5ccdd6d2bb74e454cefa50ce93"} Jan 21 13:18:14 crc kubenswrapper[4765]: I0121 13:18:14.445545 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:18:14 crc kubenswrapper[4765]: I0121 13:18:14.445887 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:18:14 crc kubenswrapper[4765]: I0121 13:18:14.831977 4765 generic.go:334] "Generic (PLEG): container finished" podID="20b31ee6-0264-4ffb-b43c-abbea443e89e" containerID="230b5315137e47af8e1bd359ecb6193925c62cb475b06f8e48876a8947d21ccb" exitCode=0 Jan 21 13:18:14 crc kubenswrapper[4765]: I0121 13:18:14.832032 4765 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m" event={"ID":"20b31ee6-0264-4ffb-b43c-abbea443e89e","Type":"ContainerDied","Data":"230b5315137e47af8e1bd359ecb6193925c62cb475b06f8e48876a8947d21ccb"} Jan 21 13:18:16 crc kubenswrapper[4765]: I0121 13:18:16.146341 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m" Jan 21 13:18:16 crc kubenswrapper[4765]: I0121 13:18:16.290933 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hwh6\" (UniqueName: \"kubernetes.io/projected/20b31ee6-0264-4ffb-b43c-abbea443e89e-kube-api-access-7hwh6\") pod \"20b31ee6-0264-4ffb-b43c-abbea443e89e\" (UID: \"20b31ee6-0264-4ffb-b43c-abbea443e89e\") " Jan 21 13:18:16 crc kubenswrapper[4765]: I0121 13:18:16.290988 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20b31ee6-0264-4ffb-b43c-abbea443e89e-util\") pod \"20b31ee6-0264-4ffb-b43c-abbea443e89e\" (UID: \"20b31ee6-0264-4ffb-b43c-abbea443e89e\") " Jan 21 13:18:16 crc kubenswrapper[4765]: I0121 13:18:16.291053 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20b31ee6-0264-4ffb-b43c-abbea443e89e-bundle\") pod \"20b31ee6-0264-4ffb-b43c-abbea443e89e\" (UID: \"20b31ee6-0264-4ffb-b43c-abbea443e89e\") " Jan 21 13:18:16 crc kubenswrapper[4765]: I0121 13:18:16.291686 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20b31ee6-0264-4ffb-b43c-abbea443e89e-bundle" (OuterVolumeSpecName: "bundle") pod "20b31ee6-0264-4ffb-b43c-abbea443e89e" (UID: "20b31ee6-0264-4ffb-b43c-abbea443e89e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:18:16 crc kubenswrapper[4765]: I0121 13:18:16.298447 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b31ee6-0264-4ffb-b43c-abbea443e89e-kube-api-access-7hwh6" (OuterVolumeSpecName: "kube-api-access-7hwh6") pod "20b31ee6-0264-4ffb-b43c-abbea443e89e" (UID: "20b31ee6-0264-4ffb-b43c-abbea443e89e"). InnerVolumeSpecName "kube-api-access-7hwh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:18:16 crc kubenswrapper[4765]: I0121 13:18:16.305204 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20b31ee6-0264-4ffb-b43c-abbea443e89e-util" (OuterVolumeSpecName: "util") pod "20b31ee6-0264-4ffb-b43c-abbea443e89e" (UID: "20b31ee6-0264-4ffb-b43c-abbea443e89e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:18:16 crc kubenswrapper[4765]: I0121 13:18:16.392578 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hwh6\" (UniqueName: \"kubernetes.io/projected/20b31ee6-0264-4ffb-b43c-abbea443e89e-kube-api-access-7hwh6\") on node \"crc\" DevicePath \"\"" Jan 21 13:18:16 crc kubenswrapper[4765]: I0121 13:18:16.392623 4765 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20b31ee6-0264-4ffb-b43c-abbea443e89e-util\") on node \"crc\" DevicePath \"\"" Jan 21 13:18:16 crc kubenswrapper[4765]: I0121 13:18:16.392634 4765 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20b31ee6-0264-4ffb-b43c-abbea443e89e-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:18:16 crc kubenswrapper[4765]: I0121 13:18:16.847725 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m" event={"ID":"20b31ee6-0264-4ffb-b43c-abbea443e89e","Type":"ContainerDied","Data":"9894358d059ed1e5bfc6738932a0538d39138cc2622685379d0720f37f62d0a0"} Jan 21 13:18:16 crc kubenswrapper[4765]: I0121 13:18:16.847773 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9894358d059ed1e5bfc6738932a0538d39138cc2622685379d0720f37f62d0a0" Jan 21 13:18:16 crc kubenswrapper[4765]: I0121 13:18:16.847843 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m" Jan 21 13:18:21 crc kubenswrapper[4765]: I0121 13:18:21.802046 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-ccbfb74b7-bm4rb"] Jan 21 13:18:21 crc kubenswrapper[4765]: E0121 13:18:21.803043 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b31ee6-0264-4ffb-b43c-abbea443e89e" containerName="extract" Jan 21 13:18:21 crc kubenswrapper[4765]: I0121 13:18:21.803059 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b31ee6-0264-4ffb-b43c-abbea443e89e" containerName="extract" Jan 21 13:18:21 crc kubenswrapper[4765]: E0121 13:18:21.803082 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b31ee6-0264-4ffb-b43c-abbea443e89e" containerName="util" Jan 21 13:18:21 crc kubenswrapper[4765]: I0121 13:18:21.803092 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b31ee6-0264-4ffb-b43c-abbea443e89e" containerName="util" Jan 21 13:18:21 crc kubenswrapper[4765]: E0121 13:18:21.803133 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b31ee6-0264-4ffb-b43c-abbea443e89e" containerName="pull" Jan 21 13:18:21 crc kubenswrapper[4765]: I0121 13:18:21.803141 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b31ee6-0264-4ffb-b43c-abbea443e89e" containerName="pull" Jan 21 13:18:21 crc kubenswrapper[4765]: I0121 13:18:21.803298 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b31ee6-0264-4ffb-b43c-abbea443e89e" containerName="extract" Jan 21 13:18:21 crc kubenswrapper[4765]: I0121 13:18:21.803822 4765 util.go:30] "No sandbox for pod can be found. 
Jan 21 13:18:21 crc kubenswrapper[4765]: I0121 13:18:21.810347 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-67h7d" Jan 21 13:18:21 crc kubenswrapper[4765]: I0121 13:18:21.863371 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsrpd\" (UniqueName: \"kubernetes.io/projected/5db9c466-59ec-47fb-8643-560935c3c92c-kube-api-access-gsrpd\") pod \"openstack-operator-controller-init-ccbfb74b7-bm4rb\" (UID: \"5db9c466-59ec-47fb-8643-560935c3c92c\") " pod="openstack-operators/openstack-operator-controller-init-ccbfb74b7-bm4rb" Jan 21 13:18:21 crc kubenswrapper[4765]: I0121 13:18:21.893997 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-ccbfb74b7-bm4rb"] Jan 21 13:18:21 crc kubenswrapper[4765]: I0121 13:18:21.964443 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsrpd\" (UniqueName: \"kubernetes.io/projected/5db9c466-59ec-47fb-8643-560935c3c92c-kube-api-access-gsrpd\") pod \"openstack-operator-controller-init-ccbfb74b7-bm4rb\" (UID: \"5db9c466-59ec-47fb-8643-560935c3c92c\") " pod="openstack-operators/openstack-operator-controller-init-ccbfb74b7-bm4rb" Jan 21 13:18:21 crc kubenswrapper[4765]: I0121 13:18:21.995844 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsrpd\" (UniqueName: \"kubernetes.io/projected/5db9c466-59ec-47fb-8643-560935c3c92c-kube-api-access-gsrpd\") pod \"openstack-operator-controller-init-ccbfb74b7-bm4rb\" (UID: \"5db9c466-59ec-47fb-8643-560935c3c92c\") " pod="openstack-operators/openstack-operator-controller-init-ccbfb74b7-bm4rb" Jan 21 13:18:22 crc kubenswrapper[4765]: I0121 13:18:22.118527 4765 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-ccbfb74b7-bm4rb" Jan 21 13:18:22 crc kubenswrapper[4765]: I0121 13:18:22.598613 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-ccbfb74b7-bm4rb"] Jan 21 13:18:22 crc kubenswrapper[4765]: W0121 13:18:22.618796 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5db9c466_59ec_47fb_8643_560935c3c92c.slice/crio-f845e8ccb6a465d940a576f9b15cdcdaea325dcdd218af1d1f5b5e766fd75c56 WatchSource:0}: Error finding container f845e8ccb6a465d940a576f9b15cdcdaea325dcdd218af1d1f5b5e766fd75c56: Status 404 returned error can't find the container with id f845e8ccb6a465d940a576f9b15cdcdaea325dcdd218af1d1f5b5e766fd75c56 Jan 21 13:18:22 crc kubenswrapper[4765]: I0121 13:18:22.897826 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-ccbfb74b7-bm4rb" event={"ID":"5db9c466-59ec-47fb-8643-560935c3c92c","Type":"ContainerStarted","Data":"f845e8ccb6a465d940a576f9b15cdcdaea325dcdd218af1d1f5b5e766fd75c56"} Jan 21 13:18:28 crc kubenswrapper[4765]: I0121 13:18:28.937800 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-ccbfb74b7-bm4rb" event={"ID":"5db9c466-59ec-47fb-8643-560935c3c92c","Type":"ContainerStarted","Data":"b65a677bbe2f98a824c02865aa4529f6b77fd92eb0a6ff72d6f1bbc299b87b29"} Jan 21 13:18:28 crc kubenswrapper[4765]: I0121 13:18:28.938339 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-ccbfb74b7-bm4rb" Jan 21 13:18:28 crc kubenswrapper[4765]: I0121 13:18:28.968473 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-ccbfb74b7-bm4rb" podStartSLOduration=2.7953323599999997 podStartE2EDuration="7.968454647s" podCreationTimestamp="2026-01-21 13:18:21 +0000 UTC" firstStartedPulling="2026-01-21 13:18:22.6358769 +0000 UTC m=+963.653602722" lastFinishedPulling="2026-01-21 13:18:27.808999187 +0000 UTC m=+968.826725009" observedRunningTime="2026-01-21 13:18:28.964109182 +0000 UTC m=+969.981835024" watchObservedRunningTime="2026-01-21 13:18:28.968454647 +0000 UTC m=+969.986180479" Jan 21 13:18:32 crc kubenswrapper[4765]: I0121 13:18:32.122015 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-ccbfb74b7-bm4rb" Jan 21 13:18:44 crc kubenswrapper[4765]: I0121 13:18:44.446848 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:18:44 crc kubenswrapper[4765]: I0121 13:18:44.448186 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:18:49 crc kubenswrapper[4765]: I0121 13:18:49.399074 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hm7zd"] Jan 21 13:18:49 crc 
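kubenswrapper[4765]: I0121 13:18:49.401038 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hm7zd"

The recurring machine-config-daemon liveness failures above are plain HTTP GETs against http://127.0.0.1:8798/health that find nothing listening ("connection refused"); the kubelet restarts the container only after failureThreshold consecutive failures. A minimal Go health endpoint of the kind such a probe expects; the port and path are taken from the log, the handler itself is illustrative:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // The probe in the log performs: GET http://127.0.0.1:8798/health
        http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK) // any status in [200,400) counts as a pass
            _, _ = w.Write([]byte("ok"))
        })
        // While this listener is down the kubelet sees "connection refused",
        // exactly the probeResult="failure" output recorded above.
        log.Fatal(http.ListenAndServe("127.0.0.1:8798", nil))
    }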
Jan 21 13:18:49 crc kubenswrapper[4765]: I0121 13:18:49.430791 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hm7zd"] Jan 21 13:18:49 crc kubenswrapper[4765]: I0121 13:18:49.529287 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9528dde3-6eb5-4247-84e7-945a4fa7083b-utilities\") pod \"community-operators-hm7zd\" (UID: \"9528dde3-6eb5-4247-84e7-945a4fa7083b\") " pod="openshift-marketplace/community-operators-hm7zd" Jan 21 13:18:49 crc kubenswrapper[4765]: I0121 13:18:49.529360 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk4bd\" (UniqueName: \"kubernetes.io/projected/9528dde3-6eb5-4247-84e7-945a4fa7083b-kube-api-access-tk4bd\") pod \"community-operators-hm7zd\" (UID: \"9528dde3-6eb5-4247-84e7-945a4fa7083b\") " pod="openshift-marketplace/community-operators-hm7zd" Jan 21 13:18:49 crc kubenswrapper[4765]: I0121 13:18:49.529400 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9528dde3-6eb5-4247-84e7-945a4fa7083b-catalog-content\") pod \"community-operators-hm7zd\" (UID: \"9528dde3-6eb5-4247-84e7-945a4fa7083b\") " pod="openshift-marketplace/community-operators-hm7zd" Jan 21 13:18:49 crc kubenswrapper[4765]: I0121 13:18:49.631252 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9528dde3-6eb5-4247-84e7-945a4fa7083b-utilities\") pod \"community-operators-hm7zd\" (UID: \"9528dde3-6eb5-4247-84e7-945a4fa7083b\") " pod="openshift-marketplace/community-operators-hm7zd" Jan 21 13:18:49 crc kubenswrapper[4765]: I0121 13:18:49.631336 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk4bd\" (UniqueName: \"kubernetes.io/projected/9528dde3-6eb5-4247-84e7-945a4fa7083b-kube-api-access-tk4bd\") pod \"community-operators-hm7zd\" (UID: \"9528dde3-6eb5-4247-84e7-945a4fa7083b\") " pod="openshift-marketplace/community-operators-hm7zd" Jan 21 13:18:49 crc kubenswrapper[4765]: I0121 13:18:49.631363 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9528dde3-6eb5-4247-84e7-945a4fa7083b-catalog-content\") pod \"community-operators-hm7zd\" (UID: \"9528dde3-6eb5-4247-84e7-945a4fa7083b\") " pod="openshift-marketplace/community-operators-hm7zd" Jan 21 13:18:49 crc kubenswrapper[4765]: I0121 13:18:49.631896 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9528dde3-6eb5-4247-84e7-945a4fa7083b-utilities\") pod \"community-operators-hm7zd\" (UID: \"9528dde3-6eb5-4247-84e7-945a4fa7083b\") " pod="openshift-marketplace/community-operators-hm7zd" Jan 21 13:18:49 crc kubenswrapper[4765]: I0121 13:18:49.631910 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9528dde3-6eb5-4247-84e7-945a4fa7083b-catalog-content\") pod \"community-operators-hm7zd\" (UID: \"9528dde3-6eb5-4247-84e7-945a4fa7083b\") " pod="openshift-marketplace/community-operators-hm7zd" Jan
21 13:18:49 crc kubenswrapper[4765]: I0121 13:18:49.664003 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk4bd\" (UniqueName: \"kubernetes.io/projected/9528dde3-6eb5-4247-84e7-945a4fa7083b-kube-api-access-tk4bd\") pod \"community-operators-hm7zd\" (UID: \"9528dde3-6eb5-4247-84e7-945a4fa7083b\") " pod="openshift-marketplace/community-operators-hm7zd"
Jan 21 13:18:49 crc kubenswrapper[4765]: I0121 13:18:49.721005 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hm7zd"
Jan 21 13:18:50 crc kubenswrapper[4765]: I0121 13:18:50.482958 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hm7zd"]
Jan 21 13:18:51 crc kubenswrapper[4765]: I0121 13:18:51.306145 4765 generic.go:334] "Generic (PLEG): container finished" podID="9528dde3-6eb5-4247-84e7-945a4fa7083b" containerID="05673f5a6f41b94010b66b42185564fdacd1cfc5b780ec50ad927c54c5ffa200" exitCode=0
Jan 21 13:18:51 crc kubenswrapper[4765]: I0121 13:18:51.306241 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm7zd" event={"ID":"9528dde3-6eb5-4247-84e7-945a4fa7083b","Type":"ContainerDied","Data":"05673f5a6f41b94010b66b42185564fdacd1cfc5b780ec50ad927c54c5ffa200"}
Jan 21 13:18:51 crc kubenswrapper[4765]: I0121 13:18:51.306954 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm7zd" event={"ID":"9528dde3-6eb5-4247-84e7-945a4fa7083b","Type":"ContainerStarted","Data":"9feef96e2bb5ec95ffa1a8a70a68a041c4516e58a404cd8ee9b38df10964dee5"}
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.321944 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm7zd" event={"ID":"9528dde3-6eb5-4247-84e7-945a4fa7083b","Type":"ContainerStarted","Data":"06cee472d8bb2213748819f73825358be0afc0338f9125f8a596acf0caff0c19"}
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.499633 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-848df65fbb-79lv9"]
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.504653 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-848df65fbb-79lv9"
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.507267 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-xvdxz"
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.512043 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-kq85p"]
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.512797 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-kq85p"
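
The community-operators-hm7zd entries above show a routine cold start. "SyncLoop ADD/UPDATE" with source="api" means the kubelet received the pod from the API server; "No sandbox for pod can be found. Need to start a new one" is logged when no existing sandbox can be reused; the volume manager then walks each volume through VerifyControllerAttachedVolume, MountVolume, and MountVolume.SetUp succeeded (two emptyDirs plus the projected kube-api-access-tk4bd service-account token volume); finally PLEG, the Pod Lifecycle Event Generator, reports container transitions. The ContainerStarted event carrying 9feef96e... appears to be the new pod sandbox, and the exitCode=0 ContainerDied events (05673f5a... here, 06cee472... at 13:18:54 below) are consistent with the catalog pod's extract init containers finishing cleanly. A small, self-contained sketch (not kubelet code) for pulling such PLEG exit events out of a dump like this one:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Scan a kubelet journal dump on stdin for PLEG "container finished"
// events and print pod ID, container ID prefix, and exit code. The
// pattern keys on the exact fields visible in the entries above.
func main() {
	re := regexp.MustCompile(`container finished" podID="([^"]+)" containerID="([^"]+)" exitCode=(\d+)`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("pod %s container %.12s exit %s\n", m[1], m[2], m[3])
		}
	}
}

Fed something like the output of journalctl -u kubelet (the file name plegscan.go below is made up), e.g. journalctl -u kubelet | go run plegscan.go, it prints one line per container exit; a nonzero exitCode in these events is usually the first thing to look for when a pod is crash-looping.
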
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.517378 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-hwvjz"
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.531285 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-kq85p"]
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.534376 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-848df65fbb-79lv9"]
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.593690 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjb2p\" (UniqueName: \"kubernetes.io/projected/cd5b6743-7a2a-4d03-8adc-952fb87e6f02-kube-api-access-jjb2p\") pod \"cinder-operator-controller-manager-9b68f5989-kq85p\" (UID: \"cd5b6743-7a2a-4d03-8adc-952fb87e6f02\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-kq85p"
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.593811 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmttf\" (UniqueName: \"kubernetes.io/projected/448c57b9-0176-42e1-a493-609bc853db01-kube-api-access-zmttf\") pod \"barbican-operator-controller-manager-848df65fbb-79lv9\" (UID: \"448c57b9-0176-42e1-a493-609bc853db01\") " pod="openstack-operators/barbican-operator-controller-manager-848df65fbb-79lv9"
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.600322 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-dgbtx"]
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.601444 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dgbtx"
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.606139 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-rdrwx"
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.632333 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-65hfk"]
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.632964 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-dgbtx"]
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.632983 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-8pvpr"]
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.633565 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-65hfk"]
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.633670 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-8pvpr"
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.634032 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-65hfk" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.641000 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-ct8qb" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.643646 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-vfh7w" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.653792 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-8pvpr"] Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.694497 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfrct\" (UniqueName: \"kubernetes.io/projected/ab7eaa76-7a22-4d3c-85a3-9b643832d707-kube-api-access-lfrct\") pod \"heat-operator-controller-manager-594c8c9d5d-8pvpr\" (UID: \"ab7eaa76-7a22-4d3c-85a3-9b643832d707\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-8pvpr" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.694575 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b7hq\" (UniqueName: \"kubernetes.io/projected/079ac5a2-3654-48e8-8bf0-597018fc2ca5-kube-api-access-2b7hq\") pod \"designate-operator-controller-manager-9f958b845-dgbtx\" (UID: \"079ac5a2-3654-48e8-8bf0-597018fc2ca5\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-dgbtx" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.694608 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmttf\" (UniqueName: \"kubernetes.io/projected/448c57b9-0176-42e1-a493-609bc853db01-kube-api-access-zmttf\") pod \"barbican-operator-controller-manager-848df65fbb-79lv9\" (UID: \"448c57b9-0176-42e1-a493-609bc853db01\") " pod="openstack-operators/barbican-operator-controller-manager-848df65fbb-79lv9" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.694629 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjb2p\" (UniqueName: \"kubernetes.io/projected/cd5b6743-7a2a-4d03-8adc-952fb87e6f02-kube-api-access-jjb2p\") pod \"cinder-operator-controller-manager-9b68f5989-kq85p\" (UID: \"cd5b6743-7a2a-4d03-8adc-952fb87e6f02\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-kq85p" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.694691 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhrcl\" (UniqueName: \"kubernetes.io/projected/4c92e105-ba8b-4828-bc30-857c5431672f-kube-api-access-nhrcl\") pod \"glance-operator-controller-manager-c6994669c-65hfk\" (UID: \"4c92e105-ba8b-4828-bc30-857c5431672f\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-65hfk" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.696159 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-t42c2"] Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.696915 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-t42c2" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.704318 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-74wth" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.704789 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr"] Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.705533 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.711624 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.711636 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7q2wn" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.759329 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-rk4x7"] Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.760562 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rk4x7" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.786015 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-mjw7p" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.786474 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjb2p\" (UniqueName: \"kubernetes.io/projected/cd5b6743-7a2a-4d03-8adc-952fb87e6f02-kube-api-access-jjb2p\") pod \"cinder-operator-controller-manager-9b68f5989-kq85p\" (UID: \"cd5b6743-7a2a-4d03-8adc-952fb87e6f02\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-kq85p" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.789989 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmttf\" (UniqueName: \"kubernetes.io/projected/448c57b9-0176-42e1-a493-609bc853db01-kube-api-access-zmttf\") pod \"barbican-operator-controller-manager-848df65fbb-79lv9\" (UID: \"448c57b9-0176-42e1-a493-609bc853db01\") " pod="openstack-operators/barbican-operator-controller-manager-848df65fbb-79lv9" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.791408 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr"] Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.814957 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ddbq\" (UniqueName: \"kubernetes.io/projected/2a3c28ee-e170-4592-8291-db76c15675d1-kube-api-access-7ddbq\") pod \"ironic-operator-controller-manager-78757b4889-rk4x7\" (UID: \"2a3c28ee-e170-4592-8291-db76c15675d1\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rk4x7" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.815030 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfrct\" (UniqueName: 
\"kubernetes.io/projected/ab7eaa76-7a22-4d3c-85a3-9b643832d707-kube-api-access-lfrct\") pod \"heat-operator-controller-manager-594c8c9d5d-8pvpr\" (UID: \"ab7eaa76-7a22-4d3c-85a3-9b643832d707\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-8pvpr" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.815089 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b7hq\" (UniqueName: \"kubernetes.io/projected/079ac5a2-3654-48e8-8bf0-597018fc2ca5-kube-api-access-2b7hq\") pod \"designate-operator-controller-manager-9f958b845-dgbtx\" (UID: \"079ac5a2-3654-48e8-8bf0-597018fc2ca5\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-dgbtx" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.815133 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert\") pod \"infra-operator-controller-manager-77c48c7859-c74jr\" (UID: \"2962f7bb-1d22-4715-b609-2eb6da1de834\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.815166 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlwxr\" (UniqueName: \"kubernetes.io/projected/00c36135-159f-43be-be7c-b4f01cf2ace7-kube-api-access-xlwxr\") pod \"horizon-operator-controller-manager-77d5c5b54f-t42c2\" (UID: \"00c36135-159f-43be-be7c-b4f01cf2ace7\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-t42c2" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.815201 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhrcl\" (UniqueName: \"kubernetes.io/projected/4c92e105-ba8b-4828-bc30-857c5431672f-kube-api-access-nhrcl\") pod \"glance-operator-controller-manager-c6994669c-65hfk\" (UID: \"4c92e105-ba8b-4828-bc30-857c5431672f\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-65hfk" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.815264 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5phsw\" (UniqueName: \"kubernetes.io/projected/2962f7bb-1d22-4715-b609-2eb6da1de834-kube-api-access-5phsw\") pod \"infra-operator-controller-manager-77c48c7859-c74jr\" (UID: \"2962f7bb-1d22-4715-b609-2eb6da1de834\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.825414 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-hv2dn"] Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.826464 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-hv2dn" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.838523 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-f27lr" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.838993 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-rk4x7"] Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.846594 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-t42c2"] Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.853291 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-hv2dn"] Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.863311 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-rxxvb"] Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.864299 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rxxvb" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.875933 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b7hq\" (UniqueName: \"kubernetes.io/projected/079ac5a2-3654-48e8-8bf0-597018fc2ca5-kube-api-access-2b7hq\") pod \"designate-operator-controller-manager-9f958b845-dgbtx\" (UID: \"079ac5a2-3654-48e8-8bf0-597018fc2ca5\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-dgbtx" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.876308 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-49f2p" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.879621 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-848df65fbb-79lv9" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.882035 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-rxxvb"] Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.890020 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhrcl\" (UniqueName: \"kubernetes.io/projected/4c92e105-ba8b-4828-bc30-857c5431672f-kube-api-access-nhrcl\") pod \"glance-operator-controller-manager-c6994669c-65hfk\" (UID: \"4c92e105-ba8b-4828-bc30-857c5431672f\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-65hfk" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.896532 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-kq85p" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.896752 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfrct\" (UniqueName: \"kubernetes.io/projected/ab7eaa76-7a22-4d3c-85a3-9b643832d707-kube-api-access-lfrct\") pod \"heat-operator-controller-manager-594c8c9d5d-8pvpr\" (UID: \"ab7eaa76-7a22-4d3c-85a3-9b643832d707\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-8pvpr" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.927544 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dgbtx" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.928689 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlwxr\" (UniqueName: \"kubernetes.io/projected/00c36135-159f-43be-be7c-b4f01cf2ace7-kube-api-access-xlwxr\") pod \"horizon-operator-controller-manager-77d5c5b54f-t42c2\" (UID: \"00c36135-159f-43be-be7c-b4f01cf2ace7\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-t42c2" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.928756 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfn99\" (UniqueName: \"kubernetes.io/projected/c78d0245-2ac0-4576-860f-20c8ad7f7fa3-kube-api-access-pfn99\") pod \"manila-operator-controller-manager-864f6b75bf-rxxvb\" (UID: \"c78d0245-2ac0-4576-860f-20c8ad7f7fa3\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rxxvb" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.928791 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5phsw\" (UniqueName: \"kubernetes.io/projected/2962f7bb-1d22-4715-b609-2eb6da1de834-kube-api-access-5phsw\") pod \"infra-operator-controller-manager-77c48c7859-c74jr\" (UID: \"2962f7bb-1d22-4715-b609-2eb6da1de834\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.928822 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ddbq\" (UniqueName: \"kubernetes.io/projected/2a3c28ee-e170-4592-8291-db76c15675d1-kube-api-access-7ddbq\") pod \"ironic-operator-controller-manager-78757b4889-rk4x7\" (UID: \"2a3c28ee-e170-4592-8291-db76c15675d1\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rk4x7" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.928854 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9szsz\" (UniqueName: \"kubernetes.io/projected/30a8ff01-0173-45a7-9460-9df64146234d-kube-api-access-9szsz\") pod \"keystone-operator-controller-manager-767fdc4f47-hv2dn\" (UID: \"30a8ff01-0173-45a7-9460-9df64146234d\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-hv2dn" Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.928889 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert\") pod \"infra-operator-controller-manager-77c48c7859-c74jr\" (UID: \"2962f7bb-1d22-4715-b609-2eb6da1de834\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr" Jan 21 13:18:53 crc 
kubenswrapper[4765]: E0121 13:18:53.929019 4765 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 21 13:18:53 crc kubenswrapper[4765]: E0121 13:18:53.929068 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert podName:2962f7bb-1d22-4715-b609-2eb6da1de834 nodeName:}" failed. No retries permitted until 2026-01-21 13:18:54.429049687 +0000 UTC m=+995.446775509 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert") pod "infra-operator-controller-manager-77c48c7859-c74jr" (UID: "2962f7bb-1d22-4715-b609-2eb6da1de834") : secret "infra-operator-webhook-server-cert" not found
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.938870 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-8kq4g"]
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.942861 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8kq4g"
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.947946 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-qdxjp"
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.956735 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-8pvpr"
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.981387 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-65hfk"
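
The two E-level entries above are the first real error in this excerpt: the infra-operator-controller-manager pod declares a "cert" volume backed by the Secret infra-operator-webhook-server-cert, the Secret does not exist yet, so MountVolume.SetUp fails and nestedpendingoperations schedules a retry. The same pair recurs at 13:18:54 and 13:18:55 below with durationBeforeRetry doubling from 500ms to 1s to 2s; this is the kubelet's per-operation exponential backoff (the cap used below is an assumption, since the excerpt only shows the first three steps; upstream kubelet tops out at roughly two minutes). A minimal sketch of that schedule:

package main

import (
	"fmt"
	"time"
)

func main() {
	// First retry interval observed in the log above.
	delay := 500 * time.Millisecond
	// Assumed cap: upstream kubelet's exponential backoff tops out around 2m2s.
	const maxDelay = 2*time.Minute + 2*time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %2d: durationBeforeRetry %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Nothing is wrong on the kubelet side here; the retries simply poll until whatever issues the webhook certificate (typically OLM or cert-manager) creates the Secret, after which the mount succeeds on the next attempt.
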
Jan 21 13:18:53 crc kubenswrapper[4765]: I0121 13:18:53.993434 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-8kq4g"]
Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.030467 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9szsz\" (UniqueName: \"kubernetes.io/projected/30a8ff01-0173-45a7-9460-9df64146234d-kube-api-access-9szsz\") pod \"keystone-operator-controller-manager-767fdc4f47-hv2dn\" (UID: \"30a8ff01-0173-45a7-9460-9df64146234d\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-hv2dn"
Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.030519 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2zkg\" (UniqueName: \"kubernetes.io/projected/ecd5f054-6284-485a-8c41-6b2338a5c0f4-kube-api-access-f2zkg\") pod \"mariadb-operator-controller-manager-c87fff755-8kq4g\" (UID: \"ecd5f054-6284-485a-8c41-6b2338a5c0f4\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8kq4g"
Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.030602 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfn99\" (UniqueName: \"kubernetes.io/projected/c78d0245-2ac0-4576-860f-20c8ad7f7fa3-kube-api-access-pfn99\") pod \"manila-operator-controller-manager-864f6b75bf-rxxvb\" (UID: \"c78d0245-2ac0-4576-860f-20c8ad7f7fa3\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rxxvb"
Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.072925 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ddbq\" (UniqueName: \"kubernetes.io/projected/2a3c28ee-e170-4592-8291-db76c15675d1-kube-api-access-7ddbq\") pod \"ironic-operator-controller-manager-78757b4889-rk4x7\" (UID: \"2a3c28ee-e170-4592-8291-db76c15675d1\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rk4x7"
Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.126444 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-r429h"]
Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.127289 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-r429h"
Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.134042 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-pjnzr"
Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.134894 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2zkg\" (UniqueName: \"kubernetes.io/projected/ecd5f054-6284-485a-8c41-6b2338a5c0f4-kube-api-access-f2zkg\") pod \"mariadb-operator-controller-manager-c87fff755-8kq4g\" (UID: \"ecd5f054-6284-485a-8c41-6b2338a5c0f4\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8kq4g"
Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.144778 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kh677"]
Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.145834 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kh677" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.152744 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rk4x7" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.160270 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-m48zr"] Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.161145 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-m48zr" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.163568 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-r429h"] Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.169178 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-d8b8j" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.172971 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-lt9sh" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.213920 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfn99\" (UniqueName: \"kubernetes.io/projected/c78d0245-2ac0-4576-860f-20c8ad7f7fa3-kube-api-access-pfn99\") pod \"manila-operator-controller-manager-864f6b75bf-rxxvb\" (UID: \"c78d0245-2ac0-4576-860f-20c8ad7f7fa3\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rxxvb" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.214512 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5phsw\" (UniqueName: \"kubernetes.io/projected/2962f7bb-1d22-4715-b609-2eb6da1de834-kube-api-access-5phsw\") pod \"infra-operator-controller-manager-77c48c7859-c74jr\" (UID: \"2962f7bb-1d22-4715-b609-2eb6da1de834\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.215062 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlwxr\" (UniqueName: \"kubernetes.io/projected/00c36135-159f-43be-be7c-b4f01cf2ace7-kube-api-access-xlwxr\") pod \"horizon-operator-controller-manager-77d5c5b54f-t42c2\" (UID: \"00c36135-159f-43be-be7c-b4f01cf2ace7\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-t42c2" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.217842 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9szsz\" (UniqueName: \"kubernetes.io/projected/30a8ff01-0173-45a7-9460-9df64146234d-kube-api-access-9szsz\") pod \"keystone-operator-controller-manager-767fdc4f47-hv2dn\" (UID: \"30a8ff01-0173-45a7-9460-9df64146234d\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-hv2dn" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.220114 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kh677"] Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.220632 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2zkg\" (UniqueName: 
\"kubernetes.io/projected/ecd5f054-6284-485a-8c41-6b2338a5c0f4-kube-api-access-f2zkg\") pod \"mariadb-operator-controller-manager-c87fff755-8kq4g\" (UID: \"ecd5f054-6284-485a-8c41-6b2338a5c0f4\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8kq4g" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.229124 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8kq4g" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.238010 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vfpc\" (UniqueName: \"kubernetes.io/projected/953ef395-07f2-4b90-8232-77b94a176094-kube-api-access-2vfpc\") pod \"nova-operator-controller-manager-65849867d6-m48zr\" (UID: \"953ef395-07f2-4b90-8232-77b94a176094\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-m48zr" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.238172 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxj7n\" (UniqueName: \"kubernetes.io/projected/882965e2-7eb0-4971-9770-e750a8fe36dc-kube-api-access-wxj7n\") pod \"octavia-operator-controller-manager-7fc9b76cf6-kh677\" (UID: \"882965e2-7eb0-4971-9770-e750a8fe36dc\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kh677" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.238302 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8xkw\" (UniqueName: \"kubernetes.io/projected/bdcf568f-99c9-4432-b763-ce16903da409-kube-api-access-d8xkw\") pod \"neutron-operator-controller-manager-cb4666565-r429h\" (UID: \"bdcf568f-99c9-4432-b763-ce16903da409\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-r429h" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.238438 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-m48zr"] Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.255688 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7"] Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.256638 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.281290 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-9nnq9" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.281440 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.312631 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-kvhff"] Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.313771 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-kvhff" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.342603 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8xkw\" (UniqueName: \"kubernetes.io/projected/bdcf568f-99c9-4432-b763-ce16903da409-kube-api-access-d8xkw\") pod \"neutron-operator-controller-manager-cb4666565-r429h\" (UID: \"bdcf568f-99c9-4432-b763-ce16903da409\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-r429h" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.342668 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpr88\" (UniqueName: \"kubernetes.io/projected/246657ac-def3-41ce-bd99-a8d00d97c86b-kube-api-access-xpr88\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7\" (UID: \"246657ac-def3-41ce-bd99-a8d00d97c86b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.342701 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7\" (UID: \"246657ac-def3-41ce-bd99-a8d00d97c86b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.342721 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vfpc\" (UniqueName: \"kubernetes.io/projected/953ef395-07f2-4b90-8232-77b94a176094-kube-api-access-2vfpc\") pod \"nova-operator-controller-manager-65849867d6-m48zr\" (UID: \"953ef395-07f2-4b90-8232-77b94a176094\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-m48zr" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.343165 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxj7n\" (UniqueName: \"kubernetes.io/projected/882965e2-7eb0-4971-9770-e750a8fe36dc-kube-api-access-wxj7n\") pod \"octavia-operator-controller-manager-7fc9b76cf6-kh677\" (UID: \"882965e2-7eb0-4971-9770-e750a8fe36dc\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kh677" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.348588 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-r28bw" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.348830 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-t42c2" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.362222 4765 generic.go:334] "Generic (PLEG): container finished" podID="9528dde3-6eb5-4247-84e7-945a4fa7083b" containerID="06cee472d8bb2213748819f73825358be0afc0338f9125f8a596acf0caff0c19" exitCode=0 Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.362282 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm7zd" event={"ID":"9528dde3-6eb5-4247-84e7-945a4fa7083b","Type":"ContainerDied","Data":"06cee472d8bb2213748819f73825358be0afc0338f9125f8a596acf0caff0c19"} Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.394711 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-97x9c"] Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.395674 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-97x9c" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.412053 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-gtppk" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.459996 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-hv2dn" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.460938 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rxxvb" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.463619 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57dlp\" (UniqueName: \"kubernetes.io/projected/17d3ffc3-5383-4beb-91d4-db120ddb1c74-kube-api-access-57dlp\") pod \"ovn-operator-controller-manager-55db956ddc-kvhff\" (UID: \"17d3ffc3-5383-4beb-91d4-db120ddb1c74\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-kvhff" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.463695 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtzfk\" (UniqueName: \"kubernetes.io/projected/2bc79302-e5a0-4288-8b2e-ee371eb775a1-kube-api-access-vtzfk\") pod \"placement-operator-controller-manager-686df47fcb-97x9c\" (UID: \"2bc79302-e5a0-4288-8b2e-ee371eb775a1\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-97x9c" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.463803 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert\") pod \"infra-operator-controller-manager-77c48c7859-c74jr\" (UID: \"2962f7bb-1d22-4715-b609-2eb6da1de834\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.463856 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpr88\" (UniqueName: \"kubernetes.io/projected/246657ac-def3-41ce-bd99-a8d00d97c86b-kube-api-access-xpr88\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7\" (UID: \"246657ac-def3-41ce-bd99-a8d00d97c86b\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.463893 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7\" (UID: \"246657ac-def3-41ce-bd99-a8d00d97c86b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" Jan 21 13:18:54 crc kubenswrapper[4765]: E0121 13:18:54.464039 4765 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 13:18:54 crc kubenswrapper[4765]: E0121 13:18:54.464108 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert podName:246657ac-def3-41ce-bd99-a8d00d97c86b nodeName:}" failed. No retries permitted until 2026-01-21 13:18:54.964089162 +0000 UTC m=+995.981814984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" (UID: "246657ac-def3-41ce-bd99-a8d00d97c86b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.465104 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vfpc\" (UniqueName: \"kubernetes.io/projected/953ef395-07f2-4b90-8232-77b94a176094-kube-api-access-2vfpc\") pod \"nova-operator-controller-manager-65849867d6-m48zr\" (UID: \"953ef395-07f2-4b90-8232-77b94a176094\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-m48zr" Jan 21 13:18:54 crc kubenswrapper[4765]: E0121 13:18:54.465280 4765 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 13:18:54 crc kubenswrapper[4765]: E0121 13:18:54.465337 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert podName:2962f7bb-1d22-4715-b609-2eb6da1de834 nodeName:}" failed. No retries permitted until 2026-01-21 13:18:55.465316097 +0000 UTC m=+996.483041989 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert") pod "infra-operator-controller-manager-77c48c7859-c74jr" (UID: "2962f7bb-1d22-4715-b609-2eb6da1de834") : secret "infra-operator-webhook-server-cert" not found Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.491841 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxj7n\" (UniqueName: \"kubernetes.io/projected/882965e2-7eb0-4971-9770-e750a8fe36dc-kube-api-access-wxj7n\") pod \"octavia-operator-controller-manager-7fc9b76cf6-kh677\" (UID: \"882965e2-7eb0-4971-9770-e750a8fe36dc\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kh677" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.517173 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-kvhff"] Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.557854 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpr88\" (UniqueName: \"kubernetes.io/projected/246657ac-def3-41ce-bd99-a8d00d97c86b-kube-api-access-xpr88\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7\" (UID: \"246657ac-def3-41ce-bd99-a8d00d97c86b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.568273 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7"] Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.581676 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57dlp\" (UniqueName: \"kubernetes.io/projected/17d3ffc3-5383-4beb-91d4-db120ddb1c74-kube-api-access-57dlp\") pod \"ovn-operator-controller-manager-55db956ddc-kvhff\" (UID: \"17d3ffc3-5383-4beb-91d4-db120ddb1c74\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-kvhff" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.581766 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtzfk\" (UniqueName: \"kubernetes.io/projected/2bc79302-e5a0-4288-8b2e-ee371eb775a1-kube-api-access-vtzfk\") pod \"placement-operator-controller-manager-686df47fcb-97x9c\" (UID: \"2bc79302-e5a0-4288-8b2e-ee371eb775a1\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-97x9c" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.594274 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-97x9c"] Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.604169 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8xkw\" (UniqueName: \"kubernetes.io/projected/bdcf568f-99c9-4432-b763-ce16903da409-kube-api-access-d8xkw\") pod \"neutron-operator-controller-manager-cb4666565-r429h\" (UID: \"bdcf568f-99c9-4432-b763-ce16903da409\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-r429h" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.604499 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-m48zr" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.620024 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtzfk\" (UniqueName: \"kubernetes.io/projected/2bc79302-e5a0-4288-8b2e-ee371eb775a1-kube-api-access-vtzfk\") pod \"placement-operator-controller-manager-686df47fcb-97x9c\" (UID: \"2bc79302-e5a0-4288-8b2e-ee371eb775a1\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-97x9c" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.625446 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57dlp\" (UniqueName: \"kubernetes.io/projected/17d3ffc3-5383-4beb-91d4-db120ddb1c74-kube-api-access-57dlp\") pod \"ovn-operator-controller-manager-55db956ddc-kvhff\" (UID: \"17d3ffc3-5383-4beb-91d4-db120ddb1c74\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-kvhff" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.635440 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dhcgg"] Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.639633 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dhcgg" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.644685 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-bxllw" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.661756 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kh677" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.697419 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-gh9vl"] Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.700557 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-gh9vl" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.703873 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7vw7j" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.749263 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dhcgg"] Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.782174 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-gh9vl"] Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.797472 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-8r9cq"] Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.798362 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-8r9cq" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.802325 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k68zm\" (UniqueName: \"kubernetes.io/projected/c7a6160a-aef5-41af-b1cc-cc2cd97125d7-kube-api-access-k68zm\") pod \"swift-operator-controller-manager-85dd56d4cc-gh9vl\" (UID: \"c7a6160a-aef5-41af-b1cc-cc2cd97125d7\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-gh9vl" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.802398 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdj98\" (UniqueName: \"kubernetes.io/projected/4c4840ab-a9b6-4243-a2f8-e21eaa84f165-kube-api-access-gdj98\") pod \"telemetry-operator-controller-manager-5f8f495fcf-dhcgg\" (UID: \"4c4840ab-a9b6-4243-a2f8-e21eaa84f165\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dhcgg" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.803079 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-cxpbk" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.819102 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-kvhff" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.826458 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-s6zq8"] Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.839655 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-s6zq8" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.846155 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-lcmzx" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.862733 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-r429h" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.887365 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-97x9c" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.952808 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-s6zq8"] Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.973794 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k68zm\" (UniqueName: \"kubernetes.io/projected/c7a6160a-aef5-41af-b1cc-cc2cd97125d7-kube-api-access-k68zm\") pod \"swift-operator-controller-manager-85dd56d4cc-gh9vl\" (UID: \"c7a6160a-aef5-41af-b1cc-cc2cd97125d7\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-gh9vl" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.973846 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnmlm\" (UniqueName: \"kubernetes.io/projected/be3fcc93-c1a3-4191-8f75-4d8aa5767593-kube-api-access-xnmlm\") pod \"test-operator-controller-manager-7cd8bc9dbb-s6zq8\" (UID: \"be3fcc93-c1a3-4191-8f75-4d8aa5767593\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-s6zq8" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.973912 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7\" (UID: \"246657ac-def3-41ce-bd99-a8d00d97c86b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" Jan 21 13:18:54 crc kubenswrapper[4765]: E0121 13:18:54.974043 4765 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 13:18:54 crc kubenswrapper[4765]: E0121 13:18:54.974095 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert podName:246657ac-def3-41ce-bd99-a8d00d97c86b nodeName:}" failed. No retries permitted until 2026-01-21 13:18:55.974076925 +0000 UTC m=+996.991802747 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" (UID: "246657ac-def3-41ce-bd99-a8d00d97c86b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.974454 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdj98\" (UniqueName: \"kubernetes.io/projected/4c4840ab-a9b6-4243-a2f8-e21eaa84f165-kube-api-access-gdj98\") pod \"telemetry-operator-controller-manager-5f8f495fcf-dhcgg\" (UID: \"4c4840ab-a9b6-4243-a2f8-e21eaa84f165\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dhcgg" Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.975134 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-8r9cq"] Jan 21 13:18:54 crc kubenswrapper[4765]: I0121 13:18:54.981409 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgpr4\" (UniqueName: \"kubernetes.io/projected/2d19b122-8cf4-4b4a-8d31-037af2fd65fb-kube-api-access-hgpr4\") pod \"watcher-operator-controller-manager-64cd966744-8r9cq\" (UID: \"2d19b122-8cf4-4b4a-8d31-037af2fd65fb\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-8r9cq" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.082922 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnmlm\" (UniqueName: \"kubernetes.io/projected/be3fcc93-c1a3-4191-8f75-4d8aa5767593-kube-api-access-xnmlm\") pod \"test-operator-controller-manager-7cd8bc9dbb-s6zq8\" (UID: \"be3fcc93-c1a3-4191-8f75-4d8aa5767593\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-s6zq8" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.083018 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgpr4\" (UniqueName: \"kubernetes.io/projected/2d19b122-8cf4-4b4a-8d31-037af2fd65fb-kube-api-access-hgpr4\") pod \"watcher-operator-controller-manager-64cd966744-8r9cq\" (UID: \"2d19b122-8cf4-4b4a-8d31-037af2fd65fb\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-8r9cq" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.083242 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdj98\" (UniqueName: \"kubernetes.io/projected/4c4840ab-a9b6-4243-a2f8-e21eaa84f165-kube-api-access-gdj98\") pod \"telemetry-operator-controller-manager-5f8f495fcf-dhcgg\" (UID: \"4c4840ab-a9b6-4243-a2f8-e21eaa84f165\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dhcgg" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.084010 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k68zm\" (UniqueName: \"kubernetes.io/projected/c7a6160a-aef5-41af-b1cc-cc2cd97125d7-kube-api-access-k68zm\") pod \"swift-operator-controller-manager-85dd56d4cc-gh9vl\" (UID: \"c7a6160a-aef5-41af-b1cc-cc2cd97125d7\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-gh9vl" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.486236 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-gh9vl" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.486791 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dhcgg" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.495611 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7"] Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.496963 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.504680 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgpr4\" (UniqueName: \"kubernetes.io/projected/2d19b122-8cf4-4b4a-8d31-037af2fd65fb-kube-api-access-hgpr4\") pod \"watcher-operator-controller-manager-64cd966744-8r9cq\" (UID: \"2d19b122-8cf4-4b4a-8d31-037af2fd65fb\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-8r9cq" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.504759 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnmlm\" (UniqueName: \"kubernetes.io/projected/be3fcc93-c1a3-4191-8f75-4d8aa5767593-kube-api-access-xnmlm\") pod \"test-operator-controller-manager-7cd8bc9dbb-s6zq8\" (UID: \"be3fcc93-c1a3-4191-8f75-4d8aa5767593\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-s6zq8" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.505186 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert\") pod \"infra-operator-controller-manager-77c48c7859-c74jr\" (UID: \"2962f7bb-1d22-4715-b609-2eb6da1de834\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr" Jan 21 13:18:55 crc kubenswrapper[4765]: E0121 13:18:55.505310 4765 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 13:18:55 crc kubenswrapper[4765]: E0121 13:18:55.505350 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert podName:2962f7bb-1d22-4715-b609-2eb6da1de834 nodeName:}" failed. No retries permitted until 2026-01-21 13:18:57.505337322 +0000 UTC m=+998.523063144 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert") pod "infra-operator-controller-manager-77c48c7859-c74jr" (UID: "2962f7bb-1d22-4715-b609-2eb6da1de834") : secret "infra-operator-webhook-server-cert" not found Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.507681 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-webhook-certs\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.507734 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.507758 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bpt7\" (UniqueName: \"kubernetes.io/projected/af5f1c65-c317-4058-9d98-066b866bf83a-kube-api-access-5bpt7\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.513852 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-vqhrz" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.514089 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.514105 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7"] Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.514158 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.526835 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-s6zq8" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.531243 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ql7j4"] Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.532404 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ql7j4"] Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.532499 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ql7j4" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.546034 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-j8f85" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.609063 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-webhook-certs\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.609365 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.609463 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bpt7\" (UniqueName: \"kubernetes.io/projected/af5f1c65-c317-4058-9d98-066b866bf83a-kube-api-access-5bpt7\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.609562 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dgj2\" (UniqueName: \"kubernetes.io/projected/cead9f09-f8d9-4cf3-960e-eb1ba8f1fa99-kube-api-access-8dgj2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ql7j4\" (UID: \"cead9f09-f8d9-4cf3-960e-eb1ba8f1fa99\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ql7j4" Jan 21 13:18:55 crc kubenswrapper[4765]: E0121 13:18:55.609792 4765 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 13:18:55 crc kubenswrapper[4765]: E0121 13:18:55.609904 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-webhook-certs podName:af5f1c65-c317-4058-9d98-066b866bf83a nodeName:}" failed. No retries permitted until 2026-01-21 13:18:56.109888965 +0000 UTC m=+997.127614787 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-webhook-certs") pod "openstack-operator-controller-manager-75fcf77584-5dfd7" (UID: "af5f1c65-c317-4058-9d98-066b866bf83a") : secret "webhook-server-cert" not found Jan 21 13:18:55 crc kubenswrapper[4765]: E0121 13:18:55.609794 4765 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 13:18:55 crc kubenswrapper[4765]: E0121 13:18:55.610092 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs podName:af5f1c65-c317-4058-9d98-066b866bf83a nodeName:}" failed. 
No retries permitted until 2026-01-21 13:18:56.110079761 +0000 UTC m=+997.127805583 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs") pod "openstack-operator-controller-manager-75fcf77584-5dfd7" (UID: "af5f1c65-c317-4058-9d98-066b866bf83a") : secret "metrics-server-cert" not found Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.633441 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-8r9cq" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.658249 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bpt7\" (UniqueName: \"kubernetes.io/projected/af5f1c65-c317-4058-9d98-066b866bf83a-kube-api-access-5bpt7\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.715387 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dgj2\" (UniqueName: \"kubernetes.io/projected/cead9f09-f8d9-4cf3-960e-eb1ba8f1fa99-kube-api-access-8dgj2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ql7j4\" (UID: \"cead9f09-f8d9-4cf3-960e-eb1ba8f1fa99\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ql7j4" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.880286 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dgj2\" (UniqueName: \"kubernetes.io/projected/cead9f09-f8d9-4cf3-960e-eb1ba8f1fa99-kube-api-access-8dgj2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ql7j4\" (UID: \"cead9f09-f8d9-4cf3-960e-eb1ba8f1fa99\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ql7j4" Jan 21 13:18:55 crc kubenswrapper[4765]: I0121 13:18:55.918055 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ql7j4" Jan 21 13:18:56 crc kubenswrapper[4765]: I0121 13:18:56.034258 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7\" (UID: \"246657ac-def3-41ce-bd99-a8d00d97c86b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" Jan 21 13:18:56 crc kubenswrapper[4765]: E0121 13:18:56.034425 4765 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 13:18:56 crc kubenswrapper[4765]: E0121 13:18:56.034484 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert podName:246657ac-def3-41ce-bd99-a8d00d97c86b nodeName:}" failed. No retries permitted until 2026-01-21 13:18:58.034469899 +0000 UTC m=+999.052195721 (durationBeforeRetry 2s). 
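[annotation] The durationBeforeRetry values in these nestedpendingoperations entries double on each consecutive failure of the same volume operation: 500ms, 1s, 2s, 4s here, and 8s further down. A minimal sketch of that schedule; the 500ms initial delay matches what the log shows, while the ~2m2s cap is an assumption taken from kubelet's exponential-backoff defaults rather than from anything visible in this log:

```go
package main

import (
	"fmt"
	"time"
)

const (
	initialDelay = 500 * time.Millisecond // first durationBeforeRetry seen above
	maxDelay     = 2*time.Minute + 2*time.Second // assumed cap (kubelet default)
)

func main() {
	delay := initialDelay
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %d fails -> no retries permitted for %v\n", attempt, delay)
		// Double the delay per failure, up to the cap.
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

The backoff is tracked per operation key (the volumeName/podName pair in the log), which is why the webhook-certs and metrics-certs volumes of the same pod each walk their own 500ms → 1s → 2s → 4s ladder.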
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" (UID: "246657ac-def3-41ce-bd99-a8d00d97c86b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 13:18:56 crc kubenswrapper[4765]: I0121 13:18:56.136745 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-kq85p"] Jan 21 13:18:56 crc kubenswrapper[4765]: I0121 13:18:56.137460 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-webhook-certs\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" Jan 21 13:18:56 crc kubenswrapper[4765]: I0121 13:18:56.137526 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" Jan 21 13:18:56 crc kubenswrapper[4765]: E0121 13:18:56.137731 4765 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 13:18:56 crc kubenswrapper[4765]: E0121 13:18:56.137806 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs podName:af5f1c65-c317-4058-9d98-066b866bf83a nodeName:}" failed. No retries permitted until 2026-01-21 13:18:57.137787667 +0000 UTC m=+998.155513489 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs") pod "openstack-operator-controller-manager-75fcf77584-5dfd7" (UID: "af5f1c65-c317-4058-9d98-066b866bf83a") : secret "metrics-server-cert" not found Jan 21 13:18:56 crc kubenswrapper[4765]: E0121 13:18:56.137865 4765 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 13:18:56 crc kubenswrapper[4765]: E0121 13:18:56.137892 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-webhook-certs podName:af5f1c65-c317-4058-9d98-066b866bf83a nodeName:}" failed. No retries permitted until 2026-01-21 13:18:57.13788461 +0000 UTC m=+998.155610432 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-webhook-certs") pod "openstack-operator-controller-manager-75fcf77584-5dfd7" (UID: "af5f1c65-c317-4058-9d98-066b866bf83a") : secret "webhook-server-cert" not found Jan 21 13:18:56 crc kubenswrapper[4765]: I0121 13:18:56.647767 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-kq85p" event={"ID":"cd5b6743-7a2a-4d03-8adc-952fb87e6f02","Type":"ContainerStarted","Data":"ed96750138a16937d26d54219ec72988a122829a941091b6482c55ffbb2b3a96"} Jan 21 13:18:57 crc kubenswrapper[4765]: I0121 13:18:57.183695 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-65hfk"] Jan 21 13:18:57 crc kubenswrapper[4765]: I0121 13:18:57.211906 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-webhook-certs\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" Jan 21 13:18:57 crc kubenswrapper[4765]: I0121 13:18:57.212011 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" Jan 21 13:18:57 crc kubenswrapper[4765]: E0121 13:18:57.212337 4765 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 13:18:57 crc kubenswrapper[4765]: E0121 13:18:57.212941 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-webhook-certs podName:af5f1c65-c317-4058-9d98-066b866bf83a nodeName:}" failed. No retries permitted until 2026-01-21 13:18:59.212378705 +0000 UTC m=+1000.230104517 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-webhook-certs") pod "openstack-operator-controller-manager-75fcf77584-5dfd7" (UID: "af5f1c65-c317-4058-9d98-066b866bf83a") : secret "webhook-server-cert" not found Jan 21 13:18:57 crc kubenswrapper[4765]: E0121 13:18:57.213604 4765 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 13:18:57 crc kubenswrapper[4765]: E0121 13:18:57.213640 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs podName:af5f1c65-c317-4058-9d98-066b866bf83a nodeName:}" failed. No retries permitted until 2026-01-21 13:18:59.21363179 +0000 UTC m=+1000.231357612 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs") pod "openstack-operator-controller-manager-75fcf77584-5dfd7" (UID: "af5f1c65-c317-4058-9d98-066b866bf83a") : secret "metrics-server-cert" not found Jan 21 13:18:57 crc kubenswrapper[4765]: I0121 13:18:57.518471 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert\") pod \"infra-operator-controller-manager-77c48c7859-c74jr\" (UID: \"2962f7bb-1d22-4715-b609-2eb6da1de834\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr" Jan 21 13:18:57 crc kubenswrapper[4765]: E0121 13:18:57.518669 4765 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 13:18:57 crc kubenswrapper[4765]: E0121 13:18:57.518957 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert podName:2962f7bb-1d22-4715-b609-2eb6da1de834 nodeName:}" failed. No retries permitted until 2026-01-21 13:19:01.518940283 +0000 UTC m=+1002.536666105 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert") pod "infra-operator-controller-manager-77c48c7859-c74jr" (UID: "2962f7bb-1d22-4715-b609-2eb6da1de834") : secret "infra-operator-webhook-server-cert" not found Jan 21 13:18:57 crc kubenswrapper[4765]: I0121 13:18:57.657176 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-dgbtx"] Jan 21 13:18:57 crc kubenswrapper[4765]: I0121 13:18:57.666982 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-848df65fbb-79lv9"] Jan 21 13:18:57 crc kubenswrapper[4765]: I0121 13:18:57.685708 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-65hfk" event={"ID":"4c92e105-ba8b-4828-bc30-857c5431672f","Type":"ContainerStarted","Data":"5cc85ee32998aaf84b6b1dfff0a21ae8c07c51ee760474d96a8256ada9685e4a"} Jan 21 13:18:57 crc kubenswrapper[4765]: I0121 13:18:57.691110 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dgbtx" event={"ID":"079ac5a2-3654-48e8-8bf0-597018fc2ca5","Type":"ContainerStarted","Data":"518517dc459e9525499bd9b638e2a5c1e3ea03c372c1e1de2774628dc0f3df9c"} Jan 21 13:18:57 crc kubenswrapper[4765]: I0121 13:18:57.704155 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-8pvpr"] Jan 21 13:18:57 crc kubenswrapper[4765]: I0121 13:18:57.718650 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-8kq4g"] Jan 21 13:18:57 crc kubenswrapper[4765]: I0121 13:18:57.755658 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hm7zd" podStartSLOduration=3.247282312 podStartE2EDuration="8.755636074s" podCreationTimestamp="2026-01-21 13:18:49 +0000 UTC" firstStartedPulling="2026-01-21 13:18:51.307973741 +0000 UTC m=+992.325699563" lastFinishedPulling="2026-01-21 13:18:56.816327503 +0000 UTC m=+997.834053325" 
observedRunningTime="2026-01-21 13:18:57.744519867 +0000 UTC m=+998.762245689" watchObservedRunningTime="2026-01-21 13:18:57.755636074 +0000 UTC m=+998.773361896" Jan 21 13:18:57 crc kubenswrapper[4765]: I0121 13:18:57.782343 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-t42c2"] Jan 21 13:18:57 crc kubenswrapper[4765]: I0121 13:18:57.843813 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-rk4x7"] Jan 21 13:18:57 crc kubenswrapper[4765]: I0121 13:18:57.911878 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-r429h"] Jan 21 13:18:57 crc kubenswrapper[4765]: I0121 13:18:57.919537 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-hv2dn"] Jan 21 13:18:57 crc kubenswrapper[4765]: W0121 13:18:57.936483 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdcf568f_99c9_4432_b763_ce16903da409.slice/crio-ae86fd1767e4d365cb243297cb055f042c89152f3acbe5084261940685b31687 WatchSource:0}: Error finding container ae86fd1767e4d365cb243297cb055f042c89152f3acbe5084261940685b31687: Status 404 returned error can't find the container with id ae86fd1767e4d365cb243297cb055f042c89152f3acbe5084261940685b31687 Jan 21 13:18:57 crc kubenswrapper[4765]: W0121 13:18:57.936897 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30a8ff01_0173_45a7_9460_9df64146234d.slice/crio-8675c48be1f3bf074a844ea14cd329c1acb39aad604c178ee607601ff13e643c WatchSource:0}: Error finding container 8675c48be1f3bf074a844ea14cd329c1acb39aad604c178ee607601ff13e643c: Status 404 returned error can't find the container with id 8675c48be1f3bf074a844ea14cd329c1acb39aad604c178ee607601ff13e643c Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.127838 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7\" (UID: \"246657ac-def3-41ce-bd99-a8d00d97c86b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" Jan 21 13:18:58 crc kubenswrapper[4765]: E0121 13:18:58.128003 4765 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 13:18:58 crc kubenswrapper[4765]: E0121 13:18:58.128053 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert podName:246657ac-def3-41ce-bd99-a8d00d97c86b nodeName:}" failed. No retries permitted until 2026-01-21 13:19:02.128037834 +0000 UTC m=+1003.145763656 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" (UID: "246657ac-def3-41ce-bd99-a8d00d97c86b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.367724 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-gh9vl"] Jan 21 13:18:58 crc kubenswrapper[4765]: W0121 13:18:58.386405 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7a6160a_aef5_41af_b1cc_cc2cd97125d7.slice/crio-2c094f7a897f18571b027a4d245a74101e698a705d6898731d47833089cc9864 WatchSource:0}: Error finding container 2c094f7a897f18571b027a4d245a74101e698a705d6898731d47833089cc9864: Status 404 returned error can't find the container with id 2c094f7a897f18571b027a4d245a74101e698a705d6898731d47833089cc9864 Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.405062 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-kvhff"] Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.419111 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ql7j4"] Jan 21 13:18:58 crc kubenswrapper[4765]: W0121 13:18:58.427434 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17d3ffc3_5383_4beb_91d4_db120ddb1c74.slice/crio-a034a660f70f3a5174acf73967808282343a2f3aa38592585d1a466927a980a8 WatchSource:0}: Error finding container a034a660f70f3a5174acf73967808282343a2f3aa38592585d1a466927a980a8: Status 404 returned error can't find the container with id a034a660f70f3a5174acf73967808282343a2f3aa38592585d1a466927a980a8 Jan 21 13:18:58 crc kubenswrapper[4765]: W0121 13:18:58.427928 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcead9f09_f8d9_4cf3_960e_eb1ba8f1fa99.slice/crio-35c1be7b579606f39d10568aae156675d64243725a84ac9163ba5f8383929383 WatchSource:0}: Error finding container 35c1be7b579606f39d10568aae156675d64243725a84ac9163ba5f8383929383: Status 404 returned error can't find the container with id 35c1be7b579606f39d10568aae156675d64243725a84ac9163ba5f8383929383 Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.436791 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-8r9cq"] Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.436879 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-m48zr"] Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.446440 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-rxxvb"] Jan 21 13:18:58 crc kubenswrapper[4765]: W0121 13:18:58.458795 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod953ef395_07f2_4b90_8232_77b94a176094.slice/crio-185c04c4c71aa8c4cc732a375714b4b7d88d2d21b9369601559df6d2bb2ac75b WatchSource:0}: Error finding container 185c04c4c71aa8c4cc732a375714b4b7d88d2d21b9369601559df6d2bb2ac75b: Status 404 returned error can't 
find the container with id 185c04c4c71aa8c4cc732a375714b4b7d88d2d21b9369601559df6d2bb2ac75b Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.463473 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dhcgg"] Jan 21 13:18:58 crc kubenswrapper[4765]: W0121 13:18:58.469366 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d19b122_8cf4_4b4a_8d31_037af2fd65fb.slice/crio-d8816d0b98cc4cc9fa8380f7a0488f54cdc9af9c102c973b39b68bc4dbf26bb0 WatchSource:0}: Error finding container d8816d0b98cc4cc9fa8380f7a0488f54cdc9af9c102c973b39b68bc4dbf26bb0: Status 404 returned error can't find the container with id d8816d0b98cc4cc9fa8380f7a0488f54cdc9af9c102c973b39b68bc4dbf26bb0 Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.481386 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-97x9c"] Jan 21 13:18:58 crc kubenswrapper[4765]: W0121 13:18:58.484140 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bc79302_e5a0_4288_8b2e_ee371eb775a1.slice/crio-458bc063a3897c50c2ae4734c833be834bb70ef2e6f70c8e1c64f40fb4e9f5dc WatchSource:0}: Error finding container 458bc063a3897c50c2ae4734c833be834bb70ef2e6f70c8e1c64f40fb4e9f5dc: Status 404 returned error can't find the container with id 458bc063a3897c50c2ae4734c833be834bb70ef2e6f70c8e1c64f40fb4e9f5dc Jan 21 13:18:58 crc kubenswrapper[4765]: W0121 13:18:58.493320 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c4840ab_a9b6_4243_a2f8_e21eaa84f165.slice/crio-076095629ded3713a48135a48f3dcce9013f0d87cd5ed68f4e28240c49671a37 WatchSource:0}: Error finding container 076095629ded3713a48135a48f3dcce9013f0d87cd5ed68f4e28240c49671a37: Status 404 returned error can't find the container with id 076095629ded3713a48135a48f3dcce9013f0d87cd5ed68f4e28240c49671a37 Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.499118 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kh677"] Jan 21 13:18:58 crc kubenswrapper[4765]: W0121 13:18:58.500439 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe3fcc93_c1a3_4191_8f75_4d8aa5767593.slice/crio-68dce49fbec6c97548291fbafc4cd062eae06fc84d1886f5c8513263f32e9622 WatchSource:0}: Error finding container 68dce49fbec6c97548291fbafc4cd062eae06fc84d1886f5c8513263f32e9622: Status 404 returned error can't find the container with id 68dce49fbec6c97548291fbafc4cd062eae06fc84d1886f5c8513263f32e9622 Jan 21 13:18:58 crc kubenswrapper[4765]: E0121 13:18:58.500786 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gdj98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-dhcgg_openstack-operators(4c4840ab-a9b6-4243-a2f8-e21eaa84f165): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 13:18:58 crc kubenswrapper[4765]: E0121 13:18:58.501068 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wxj7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7fc9b76cf6-kh677_openstack-operators(882965e2-7eb0-4971-9770-e750a8fe36dc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 13:18:58 crc kubenswrapper[4765]: E0121 13:18:58.501870 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dhcgg" podUID="4c4840ab-a9b6-4243-a2f8-e21eaa84f165" Jan 21 13:18:58 crc kubenswrapper[4765]: E0121 13:18:58.502537 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kh677" podUID="882965e2-7eb0-4971-9770-e750a8fe36dc" Jan 21 13:18:58 crc kubenswrapper[4765]: E0121 13:18:58.502812 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xnmlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-s6zq8_openstack-operators(be3fcc93-c1a3-4191-8f75-4d8aa5767593): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 13:18:58 crc kubenswrapper[4765]: E0121 13:18:58.502900 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pfn99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-864f6b75bf-rxxvb_openstack-operators(c78d0245-2ac0-4576-860f-20c8ad7f7fa3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 13:18:58 crc 
kubenswrapper[4765]: E0121 13:18:58.503956 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-s6zq8" podUID="be3fcc93-c1a3-4191-8f75-4d8aa5767593" Jan 21 13:18:58 crc kubenswrapper[4765]: E0121 13:18:58.504004 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rxxvb" podUID="c78d0245-2ac0-4576-860f-20c8ad7f7fa3" Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.511487 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-s6zq8"] Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.725629 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-gh9vl" event={"ID":"c7a6160a-aef5-41af-b1cc-cc2cd97125d7","Type":"ContainerStarted","Data":"2c094f7a897f18571b027a4d245a74101e698a705d6898731d47833089cc9864"} Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.728116 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-848df65fbb-79lv9" event={"ID":"448c57b9-0176-42e1-a493-609bc853db01","Type":"ContainerStarted","Data":"687e869783024fb3c8c8b8098edd2017a7416cb9c5613e8d97de54f425f20a46"} Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.735388 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8kq4g" event={"ID":"ecd5f054-6284-485a-8c41-6b2338a5c0f4","Type":"ContainerStarted","Data":"7e6a846881705fdb95b971685766c8bf9e5d22155e0a4191e2c1384fab89e8f7"} Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.741863 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rk4x7" event={"ID":"2a3c28ee-e170-4592-8291-db76c15675d1","Type":"ContainerStarted","Data":"47689c611695cbf446b10c9e042f5e006e6fb2c8b09dc0afd39f186890997305"} Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.743584 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rxxvb" event={"ID":"c78d0245-2ac0-4576-860f-20c8ad7f7fa3","Type":"ContainerStarted","Data":"673c4c2a5e3818bc35e91b19a967dc3365c307941e9faccd632c12d17aff0be6"} Jan 21 13:18:58 crc kubenswrapper[4765]: E0121 13:18:58.746831 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rxxvb" podUID="c78d0245-2ac0-4576-860f-20c8ad7f7fa3" Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.748866 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-hv2dn" event={"ID":"30a8ff01-0173-45a7-9460-9df64146234d","Type":"ContainerStarted","Data":"8675c48be1f3bf074a844ea14cd329c1acb39aad604c178ee607601ff13e643c"} Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.755716 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-686df47fcb-97x9c" event={"ID":"2bc79302-e5a0-4288-8b2e-ee371eb775a1","Type":"ContainerStarted","Data":"458bc063a3897c50c2ae4734c833be834bb70ef2e6f70c8e1c64f40fb4e9f5dc"} Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.767750 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-m48zr" event={"ID":"953ef395-07f2-4b90-8232-77b94a176094","Type":"ContainerStarted","Data":"185c04c4c71aa8c4cc732a375714b4b7d88d2d21b9369601559df6d2bb2ac75b"} Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.775660 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kh677" event={"ID":"882965e2-7eb0-4971-9770-e750a8fe36dc","Type":"ContainerStarted","Data":"890034a1131ab2cf0f05b8ec860aca4dce5fc3d0f115869c941b52e4adf9bb49"} Jan 21 13:18:58 crc kubenswrapper[4765]: E0121 13:18:58.777630 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kh677" podUID="882965e2-7eb0-4971-9770-e750a8fe36dc" Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.778702 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dhcgg" event={"ID":"4c4840ab-a9b6-4243-a2f8-e21eaa84f165","Type":"ContainerStarted","Data":"076095629ded3713a48135a48f3dcce9013f0d87cd5ed68f4e28240c49671a37"} Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.780875 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ql7j4" event={"ID":"cead9f09-f8d9-4cf3-960e-eb1ba8f1fa99","Type":"ContainerStarted","Data":"35c1be7b579606f39d10568aae156675d64243725a84ac9163ba5f8383929383"} Jan 21 13:18:58 crc kubenswrapper[4765]: E0121 13:18:58.783886 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dhcgg" podUID="4c4840ab-a9b6-4243-a2f8-e21eaa84f165" Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.784357 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-kvhff" event={"ID":"17d3ffc3-5383-4beb-91d4-db120ddb1c74","Type":"ContainerStarted","Data":"a034a660f70f3a5174acf73967808282343a2f3aa38592585d1a466927a980a8"} Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.839251 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-8pvpr" event={"ID":"ab7eaa76-7a22-4d3c-85a3-9b643832d707","Type":"ContainerStarted","Data":"66e16cb073a02af2df5528a3ee4686389ad34c6667c3fdf5aabede63b5fa94e4"} Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.845982 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-8r9cq" 
event={"ID":"2d19b122-8cf4-4b4a-8d31-037af2fd65fb","Type":"ContainerStarted","Data":"d8816d0b98cc4cc9fa8380f7a0488f54cdc9af9c102c973b39b68bc4dbf26bb0"} Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.849802 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-s6zq8" event={"ID":"be3fcc93-c1a3-4191-8f75-4d8aa5767593","Type":"ContainerStarted","Data":"68dce49fbec6c97548291fbafc4cd062eae06fc84d1886f5c8513263f32e9622"} Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.856112 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-r429h" event={"ID":"bdcf568f-99c9-4432-b763-ce16903da409","Type":"ContainerStarted","Data":"ae86fd1767e4d365cb243297cb055f042c89152f3acbe5084261940685b31687"} Jan 21 13:18:58 crc kubenswrapper[4765]: E0121 13:18:58.857537 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-s6zq8" podUID="be3fcc93-c1a3-4191-8f75-4d8aa5767593" Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.862190 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-t42c2" event={"ID":"00c36135-159f-43be-be7c-b4f01cf2ace7","Type":"ContainerStarted","Data":"80d23ee4be78144ea8ab227db7371fe3bf1af4a6fcb286d7e94ed9eb729c75e9"} Jan 21 13:18:58 crc kubenswrapper[4765]: I0121 13:18:58.873139 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm7zd" event={"ID":"9528dde3-6eb5-4247-84e7-945a4fa7083b","Type":"ContainerStarted","Data":"4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1"} Jan 21 13:18:59 crc kubenswrapper[4765]: I0121 13:18:59.042243 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-btvw5"] Jan 21 13:18:59 crc kubenswrapper[4765]: I0121 13:18:59.043773 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btvw5" Jan 21 13:18:59 crc kubenswrapper[4765]: I0121 13:18:59.051802 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btvw5"] Jan 21 13:18:59 crc kubenswrapper[4765]: I0121 13:18:59.159678 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00652994-f3cc-4cd8-946c-670c24b0e8a7-utilities\") pod \"redhat-marketplace-btvw5\" (UID: \"00652994-f3cc-4cd8-946c-670c24b0e8a7\") " pod="openshift-marketplace/redhat-marketplace-btvw5" Jan 21 13:18:59 crc kubenswrapper[4765]: I0121 13:18:59.159742 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00652994-f3cc-4cd8-946c-670c24b0e8a7-catalog-content\") pod \"redhat-marketplace-btvw5\" (UID: \"00652994-f3cc-4cd8-946c-670c24b0e8a7\") " pod="openshift-marketplace/redhat-marketplace-btvw5" Jan 21 13:18:59 crc kubenswrapper[4765]: I0121 13:18:59.159793 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz9f8\" (UniqueName: \"kubernetes.io/projected/00652994-f3cc-4cd8-946c-670c24b0e8a7-kube-api-access-xz9f8\") pod \"redhat-marketplace-btvw5\" (UID: \"00652994-f3cc-4cd8-946c-670c24b0e8a7\") " pod="openshift-marketplace/redhat-marketplace-btvw5" Jan 21 13:18:59 crc kubenswrapper[4765]: I0121 13:18:59.263656 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-webhook-certs\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" Jan 21 13:18:59 crc kubenswrapper[4765]: I0121 13:18:59.263726 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" Jan 21 13:18:59 crc kubenswrapper[4765]: I0121 13:18:59.263825 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00652994-f3cc-4cd8-946c-670c24b0e8a7-catalog-content\") pod \"redhat-marketplace-btvw5\" (UID: \"00652994-f3cc-4cd8-946c-670c24b0e8a7\") " pod="openshift-marketplace/redhat-marketplace-btvw5" Jan 21 13:18:59 crc kubenswrapper[4765]: I0121 13:18:59.263846 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00652994-f3cc-4cd8-946c-670c24b0e8a7-utilities\") pod \"redhat-marketplace-btvw5\" (UID: \"00652994-f3cc-4cd8-946c-670c24b0e8a7\") " pod="openshift-marketplace/redhat-marketplace-btvw5" Jan 21 13:18:59 crc kubenswrapper[4765]: I0121 13:18:59.263877 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz9f8\" (UniqueName: \"kubernetes.io/projected/00652994-f3cc-4cd8-946c-670c24b0e8a7-kube-api-access-xz9f8\") pod \"redhat-marketplace-btvw5\" (UID: \"00652994-f3cc-4cd8-946c-670c24b0e8a7\") " 
pod="openshift-marketplace/redhat-marketplace-btvw5" Jan 21 13:18:59 crc kubenswrapper[4765]: E0121 13:18:59.263887 4765 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 13:18:59 crc kubenswrapper[4765]: E0121 13:18:59.263974 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-webhook-certs podName:af5f1c65-c317-4058-9d98-066b866bf83a nodeName:}" failed. No retries permitted until 2026-01-21 13:19:03.263950686 +0000 UTC m=+1004.281676608 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-webhook-certs") pod "openstack-operator-controller-manager-75fcf77584-5dfd7" (UID: "af5f1c65-c317-4058-9d98-066b866bf83a") : secret "webhook-server-cert" not found Jan 21 13:18:59 crc kubenswrapper[4765]: E0121 13:18:59.264305 4765 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 13:18:59 crc kubenswrapper[4765]: E0121 13:18:59.264369 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs podName:af5f1c65-c317-4058-9d98-066b866bf83a nodeName:}" failed. No retries permitted until 2026-01-21 13:19:03.264350407 +0000 UTC m=+1004.282076289 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs") pod "openstack-operator-controller-manager-75fcf77584-5dfd7" (UID: "af5f1c65-c317-4058-9d98-066b866bf83a") : secret "metrics-server-cert" not found Jan 21 13:18:59 crc kubenswrapper[4765]: I0121 13:18:59.264953 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00652994-f3cc-4cd8-946c-670c24b0e8a7-catalog-content\") pod \"redhat-marketplace-btvw5\" (UID: \"00652994-f3cc-4cd8-946c-670c24b0e8a7\") " pod="openshift-marketplace/redhat-marketplace-btvw5" Jan 21 13:18:59 crc kubenswrapper[4765]: I0121 13:18:59.265169 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00652994-f3cc-4cd8-946c-670c24b0e8a7-utilities\") pod \"redhat-marketplace-btvw5\" (UID: \"00652994-f3cc-4cd8-946c-670c24b0e8a7\") " pod="openshift-marketplace/redhat-marketplace-btvw5" Jan 21 13:18:59 crc kubenswrapper[4765]: I0121 13:18:59.304953 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz9f8\" (UniqueName: \"kubernetes.io/projected/00652994-f3cc-4cd8-946c-670c24b0e8a7-kube-api-access-xz9f8\") pod \"redhat-marketplace-btvw5\" (UID: \"00652994-f3cc-4cd8-946c-670c24b0e8a7\") " pod="openshift-marketplace/redhat-marketplace-btvw5" Jan 21 13:18:59 crc kubenswrapper[4765]: I0121 13:18:59.385048 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btvw5" Jan 21 13:18:59 crc kubenswrapper[4765]: I0121 13:18:59.722556 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hm7zd" Jan 21 13:18:59 crc kubenswrapper[4765]: I0121 13:18:59.722942 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hm7zd" Jan 21 13:18:59 crc kubenswrapper[4765]: E0121 13:18:59.910377 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-s6zq8" podUID="be3fcc93-c1a3-4191-8f75-4d8aa5767593" Jan 21 13:18:59 crc kubenswrapper[4765]: E0121 13:18:59.910769 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dhcgg" podUID="4c4840ab-a9b6-4243-a2f8-e21eaa84f165" Jan 21 13:18:59 crc kubenswrapper[4765]: E0121 13:18:59.910814 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kh677" podUID="882965e2-7eb0-4971-9770-e750a8fe36dc" Jan 21 13:18:59 crc kubenswrapper[4765]: E0121 13:18:59.922544 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rxxvb" podUID="c78d0245-2ac0-4576-860f-20c8ad7f7fa3" Jan 21 13:19:00 crc kubenswrapper[4765]: I0121 13:19:00.147068 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btvw5"] Jan 21 13:19:00 crc kubenswrapper[4765]: W0121 13:19:00.185809 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00652994_f3cc_4cd8_946c_670c24b0e8a7.slice/crio-c1567f293f1b144d8580f4b900d7bcfde0b56462ecb1cfb15d8995f3d3ddb861 WatchSource:0}: Error finding container c1567f293f1b144d8580f4b900d7bcfde0b56462ecb1cfb15d8995f3d3ddb861: Status 404 returned error can't find the container with id c1567f293f1b144d8580f4b900d7bcfde0b56462ecb1cfb15d8995f3d3ddb861 Jan 21 13:19:00 crc kubenswrapper[4765]: I0121 13:19:00.858333 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hm7zd" podUID="9528dde3-6eb5-4247-84e7-945a4fa7083b" containerName="registry-server" probeResult="failure" output=< Jan 21 13:19:00 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s Jan 21 13:19:00 crc kubenswrapper[4765]: > Jan 21 13:19:00 crc kubenswrapper[4765]: I0121 13:19:00.919941 4765 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btvw5" event={"ID":"00652994-f3cc-4cd8-946c-670c24b0e8a7","Type":"ContainerStarted","Data":"c1567f293f1b144d8580f4b900d7bcfde0b56462ecb1cfb15d8995f3d3ddb861"} Jan 21 13:19:01 crc kubenswrapper[4765]: I0121 13:19:01.548587 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert\") pod \"infra-operator-controller-manager-77c48c7859-c74jr\" (UID: \"2962f7bb-1d22-4715-b609-2eb6da1de834\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr" Jan 21 13:19:01 crc kubenswrapper[4765]: E0121 13:19:01.549121 4765 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 13:19:01 crc kubenswrapper[4765]: E0121 13:19:01.549182 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert podName:2962f7bb-1d22-4715-b609-2eb6da1de834 nodeName:}" failed. No retries permitted until 2026-01-21 13:19:09.549164001 +0000 UTC m=+1010.566889823 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert") pod "infra-operator-controller-manager-77c48c7859-c74jr" (UID: "2962f7bb-1d22-4715-b609-2eb6da1de834") : secret "infra-operator-webhook-server-cert" not found Jan 21 13:19:01 crc kubenswrapper[4765]: I0121 13:19:01.951906 4765 generic.go:334] "Generic (PLEG): container finished" podID="00652994-f3cc-4cd8-946c-670c24b0e8a7" containerID="525e7c833be0c9aaecaa3bb143bb8b3e85d3f4f3f3a988497d9a304c34f0453f" exitCode=0 Jan 21 13:19:01 crc kubenswrapper[4765]: I0121 13:19:01.951952 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btvw5" event={"ID":"00652994-f3cc-4cd8-946c-670c24b0e8a7","Type":"ContainerDied","Data":"525e7c833be0c9aaecaa3bb143bb8b3e85d3f4f3f3a988497d9a304c34f0453f"} Jan 21 13:19:02 crc kubenswrapper[4765]: I0121 13:19:02.232497 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7\" (UID: \"246657ac-def3-41ce-bd99-a8d00d97c86b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" Jan 21 13:19:02 crc kubenswrapper[4765]: E0121 13:19:02.232736 4765 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 13:19:02 crc kubenswrapper[4765]: E0121 13:19:02.232799 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert podName:246657ac-def3-41ce-bd99-a8d00d97c86b nodeName:}" failed. No retries permitted until 2026-01-21 13:19:10.232780491 +0000 UTC m=+1011.250506313 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" (UID: "246657ac-def3-41ce-bd99-a8d00d97c86b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 13:19:03 crc kubenswrapper[4765]: I0121 13:19:03.342334 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" Jan 21 13:19:03 crc kubenswrapper[4765]: I0121 13:19:03.342797 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-webhook-certs\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" Jan 21 13:19:03 crc kubenswrapper[4765]: E0121 13:19:03.342960 4765 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 13:19:03 crc kubenswrapper[4765]: E0121 13:19:03.343063 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-webhook-certs podName:af5f1c65-c317-4058-9d98-066b866bf83a nodeName:}" failed. No retries permitted until 2026-01-21 13:19:11.34303937 +0000 UTC m=+1012.360765192 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-webhook-certs") pod "openstack-operator-controller-manager-75fcf77584-5dfd7" (UID: "af5f1c65-c317-4058-9d98-066b866bf83a") : secret "webhook-server-cert" not found Jan 21 13:19:03 crc kubenswrapper[4765]: E0121 13:19:03.344584 4765 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 13:19:03 crc kubenswrapper[4765]: E0121 13:19:03.344688 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs podName:af5f1c65-c317-4058-9d98-066b866bf83a nodeName:}" failed. No retries permitted until 2026-01-21 13:19:11.344667956 +0000 UTC m=+1012.362393778 (durationBeforeRetry 8s). 
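Each failed SetUp pushes the next attempt out twice as far: the same volumes were retried after 4s at 13:18:59, after 8s here, and after 16s from 13:19:09 onward. A tiny sketch of that doubling schedule; the seed value and cap below are illustrative stand-ins, not kubelet's actual backoff constants:

```go
package main

import (
	"fmt"
	"time"
)

// nextRetry doubles the wait between mount attempts up to a cap,
// matching the durationBeforeRetry growth visible in this log.
func nextRetry(d, max time.Duration) time.Duration {
	d *= 2
	if d > max {
		return max
	}
	return d
}

func main() {
	d := 2 * time.Second // assumed seed so the first printed delay is 4s
	for i := 0; i < 5; i++ {
		d = nextRetry(d, 2*time.Minute)
		fmt.Println("durationBeforeRetry", d) // 4s 8s 16s 32s 1m4s
	}
}
```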
Jan 21 13:19:09 crc kubenswrapper[4765]: I0121 13:19:09.616359 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert\") pod \"infra-operator-controller-manager-77c48c7859-c74jr\" (UID: \"2962f7bb-1d22-4715-b609-2eb6da1de834\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr"
Jan 21 13:19:09 crc kubenswrapper[4765]: E0121 13:19:09.616525 4765 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 21 13:19:09 crc kubenswrapper[4765]: E0121 13:19:09.616870 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert podName:2962f7bb-1d22-4715-b609-2eb6da1de834 nodeName:}" failed. No retries permitted until 2026-01-21 13:19:25.616850392 +0000 UTC m=+1026.634576214 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert") pod "infra-operator-controller-manager-77c48c7859-c74jr" (UID: "2962f7bb-1d22-4715-b609-2eb6da1de834") : secret "infra-operator-webhook-server-cert" not found
Jan 21 13:19:09 crc kubenswrapper[4765]: I0121 13:19:09.778357 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hm7zd"
Jan 21 13:19:09 crc kubenswrapper[4765]: I0121 13:19:09.833519 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hm7zd"
Jan 21 13:19:10 crc kubenswrapper[4765]: I0121 13:19:10.023709 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hm7zd"]
Jan 21 13:19:10 crc kubenswrapper[4765]: I0121 13:19:10.233864 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7\" (UID: \"246657ac-def3-41ce-bd99-a8d00d97c86b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7"
Jan 21 13:19:10 crc kubenswrapper[4765]: E0121 13:19:10.234057 4765 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 21 13:19:10 crc kubenswrapper[4765]: E0121 13:19:10.234122 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert podName:246657ac-def3-41ce-bd99-a8d00d97c86b nodeName:}" failed. No retries permitted until 2026-01-21 13:19:26.234105656 +0000 UTC m=+1027.251831478 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" (UID: "246657ac-def3-41ce-bd99-a8d00d97c86b") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 21 13:19:11 crc kubenswrapper[4765]: I0121 13:19:11.289588 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hm7zd" podUID="9528dde3-6eb5-4247-84e7-945a4fa7083b" containerName="registry-server" containerID="cri-o://4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1" gracePeriod=2
Jan 21 13:19:11 crc kubenswrapper[4765]: I0121 13:19:11.348881 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-webhook-certs\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7"
Jan 21 13:19:11 crc kubenswrapper[4765]: I0121 13:19:11.350790 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7"
Jan 21 13:19:11 crc kubenswrapper[4765]: E0121 13:19:11.351091 4765 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 21 13:19:11 crc kubenswrapper[4765]: E0121 13:19:11.351176 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs podName:af5f1c65-c317-4058-9d98-066b866bf83a nodeName:}" failed. No retries permitted until 2026-01-21 13:19:27.351152472 +0000 UTC m=+1028.368878294 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs") pod "openstack-operator-controller-manager-75fcf77584-5dfd7" (UID: "af5f1c65-c317-4058-9d98-066b866bf83a") : secret "metrics-server-cert" not found
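The "Killing container with a grace period" entry above (gracePeriod=2 for registry-server; gracePeriod=600 appears later for machine-config-daemon) follows the usual SIGTERM-then-SIGKILL pattern. A simplified sketch of that flow; no real signals are sent and the channel stands in for the container runtime:

```go
package main

import (
	"fmt"
	"time"
)

// stopContainer sends SIGTERM first and escalates to SIGKILL only if the
// container outlives its grace period. Purely illustrative pseudologic.
func stopContainer(name string, grace time.Duration, exited <-chan struct{}) {
	fmt.Println("SIGTERM ->", name)
	select {
	case <-exited:
		fmt.Println(name, "exited within grace period")
	case <-time.After(grace):
		fmt.Println("SIGKILL ->", name)
	}
}

func main() {
	exited := make(chan struct{})
	go func() { time.Sleep(300 * time.Millisecond); close(exited) }()
	stopContainer("registry-server", 2*time.Second, exited)
}
```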
Jan 21 13:19:11 crc kubenswrapper[4765]: I0121 13:19:11.366696 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-webhook-certs\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7"
Jan 21 13:19:12 crc kubenswrapper[4765]: I0121 13:19:12.499748 4765 generic.go:334] "Generic (PLEG): container finished" podID="9528dde3-6eb5-4247-84e7-945a4fa7083b" containerID="4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1" exitCode=0
Jan 21 13:19:12 crc kubenswrapper[4765]: I0121 13:19:12.500273 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm7zd" event={"ID":"9528dde3-6eb5-4247-84e7-945a4fa7083b","Type":"ContainerDied","Data":"4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1"}
Jan 21 13:19:13 crc kubenswrapper[4765]: E0121 13:19:13.874648 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822"
Jan 21 13:19:13 crc kubenswrapper[4765]: E0121 13:19:13.875094 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xlwxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-t42c2_openstack-operators(00c36135-159f-43be-be7c-b4f01cf2ace7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 13:19:13 crc kubenswrapper[4765]: E0121 13:19:13.876609 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-t42c2" podUID="00c36135-159f-43be-be7c-b4f01cf2ace7"
Jan 21 13:19:14 crc kubenswrapper[4765]: I0121 13:19:14.446045 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 13:19:14 crc kubenswrapper[4765]: I0121 13:19:14.446106 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 13:19:14 crc kubenswrapper[4765]: I0121 13:19:14.446149 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v72nq"
Jan 21 13:19:14 crc kubenswrapper[4765]: I0121 13:19:14.446781 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3163e8db45db8b9601f45b03cbef2661d131b6e749b48c66d1778284a24a76c2"} pod="openshift-machine-config-operator/machine-config-daemon-v72nq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 13:19:14 crc kubenswrapper[4765]: I0121 13:19:14.446862 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" containerID="cri-o://3163e8db45db8b9601f45b03cbef2661d131b6e749b48c66d1778284a24a76c2" gracePeriod=600
Jan 21 13:19:14 crc kubenswrapper[4765]: E0121 13:19:14.692848 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-t42c2" podUID="00c36135-159f-43be-be7c-b4f01cf2ace7"
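The machine-config-daemon liveness probe above is a plain HTTP GET against 127.0.0.1:8798/health; "connection refused" marks the container unhealthy and triggers the restart that follows. A rough external stand-in for that check (the 1s timeout is borrowed from the TimeoutSeconds:1 in the probe specs dumped in this log, and the rest of kubelet's prober machinery is omitted):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same endpoint the kubelet probes; any transport error counts as failure.
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		fmt.Println("liveness failure:", err) // e.g. connect: connection refused
		return
	}
	defer resp.Body.Close()
	fmt.Println("liveness result:", resp.Status)
}
```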
pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-t42c2" podUID="00c36135-159f-43be-be7c-b4f01cf2ace7" Jan 21 13:19:15 crc kubenswrapper[4765]: I0121 13:19:15.711335 4765 generic.go:334] "Generic (PLEG): container finished" podID="e149390c-e4da-4dfd-bed2-b14de058f921" containerID="3163e8db45db8b9601f45b03cbef2661d131b6e749b48c66d1778284a24a76c2" exitCode=0 Jan 21 13:19:15 crc kubenswrapper[4765]: I0121 13:19:15.711401 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerDied","Data":"3163e8db45db8b9601f45b03cbef2661d131b6e749b48c66d1778284a24a76c2"} Jan 21 13:19:15 crc kubenswrapper[4765]: I0121 13:19:15.711443 4765 scope.go:117] "RemoveContainer" containerID="f52e9baa6469e50f020fdb819604c74920e3021231bce0736ea82e11d2f65248" Jan 21 13:19:15 crc kubenswrapper[4765]: E0121 13:19:15.860838 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525" Jan 21 13:19:15 crc kubenswrapper[4765]: E0121 13:19:15.861027 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7ddbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-78757b4889-rk4x7_openstack-operators(2a3c28ee-e170-4592-8291-db76c15675d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:19:15 crc kubenswrapper[4765]: E0121 13:19:15.862841 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rk4x7" podUID="2a3c28ee-e170-4592-8291-db76c15675d1" Jan 21 13:19:16 crc kubenswrapper[4765]: E0121 13:19:16.725225 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rk4x7" podUID="2a3c28ee-e170-4592-8291-db76c15675d1" Jan 21 13:19:17 crc kubenswrapper[4765]: E0121 13:19:17.584154 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 21 13:19:17 crc kubenswrapper[4765]: E0121 13:19:17.584452 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lfrct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-8pvpr_openstack-operators(ab7eaa76-7a22-4d3c-85a3-9b643832d707): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:19:17 crc kubenswrapper[4765]: E0121 13:19:17.585686 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-8pvpr" podUID="ab7eaa76-7a22-4d3c-85a3-9b643832d707" Jan 21 13:19:17 crc kubenswrapper[4765]: E0121 13:19:17.732235 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-8pvpr" podUID="ab7eaa76-7a22-4d3c-85a3-9b643832d707" Jan 21 13:19:18 crc kubenswrapper[4765]: I0121 13:19:18.696889 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4wvkw"] Jan 21 13:19:18 crc kubenswrapper[4765]: I0121 13:19:18.698958 4765 util.go:30] "No sandbox for pod can be found. 
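The repeated "rpc error: code = Canceled desc = copying config: context canceled" above means the CRI pull RPC was cancelled from the kubelet side while the runtime was still copying image data, not that the registry rejected the pull. A minimal reproduction of that error shape; the 10s "copy" and the 1s cancel are arbitrary stand-ins:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// copyConfig models a long-running image copy that honors its context.
func copyConfig(ctx context.Context) error {
	select {
	case <-time.After(10 * time.Second): // pretend this is the blob copy
		return nil
	case <-ctx.Done():
		return ctx.Err() // surfaces as "context canceled" when cancelled
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go func() { time.Sleep(1 * time.Second); cancel() }() // caller gives up
	err := copyConfig(ctx)
	fmt.Println(err, errors.Is(err, context.Canceled)) // context canceled true
}
```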
Need to start a new one" pod="openshift-marketplace/certified-operators-4wvkw" Jan 21 13:19:18 crc kubenswrapper[4765]: I0121 13:19:18.713838 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4wvkw"] Jan 21 13:19:18 crc kubenswrapper[4765]: I0121 13:19:18.834039 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47229e54-b901-49f9-9cf8-25f65374d9ee-catalog-content\") pod \"certified-operators-4wvkw\" (UID: \"47229e54-b901-49f9-9cf8-25f65374d9ee\") " pod="openshift-marketplace/certified-operators-4wvkw" Jan 21 13:19:18 crc kubenswrapper[4765]: I0121 13:19:18.834108 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvdmw\" (UniqueName: \"kubernetes.io/projected/47229e54-b901-49f9-9cf8-25f65374d9ee-kube-api-access-jvdmw\") pod \"certified-operators-4wvkw\" (UID: \"47229e54-b901-49f9-9cf8-25f65374d9ee\") " pod="openshift-marketplace/certified-operators-4wvkw" Jan 21 13:19:18 crc kubenswrapper[4765]: I0121 13:19:18.834134 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47229e54-b901-49f9-9cf8-25f65374d9ee-utilities\") pod \"certified-operators-4wvkw\" (UID: \"47229e54-b901-49f9-9cf8-25f65374d9ee\") " pod="openshift-marketplace/certified-operators-4wvkw" Jan 21 13:19:18 crc kubenswrapper[4765]: I0121 13:19:18.935333 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47229e54-b901-49f9-9cf8-25f65374d9ee-catalog-content\") pod \"certified-operators-4wvkw\" (UID: \"47229e54-b901-49f9-9cf8-25f65374d9ee\") " pod="openshift-marketplace/certified-operators-4wvkw" Jan 21 13:19:18 crc kubenswrapper[4765]: I0121 13:19:18.935698 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvdmw\" (UniqueName: \"kubernetes.io/projected/47229e54-b901-49f9-9cf8-25f65374d9ee-kube-api-access-jvdmw\") pod \"certified-operators-4wvkw\" (UID: \"47229e54-b901-49f9-9cf8-25f65374d9ee\") " pod="openshift-marketplace/certified-operators-4wvkw" Jan 21 13:19:18 crc kubenswrapper[4765]: I0121 13:19:18.935832 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47229e54-b901-49f9-9cf8-25f65374d9ee-utilities\") pod \"certified-operators-4wvkw\" (UID: \"47229e54-b901-49f9-9cf8-25f65374d9ee\") " pod="openshift-marketplace/certified-operators-4wvkw" Jan 21 13:19:18 crc kubenswrapper[4765]: I0121 13:19:18.935958 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47229e54-b901-49f9-9cf8-25f65374d9ee-catalog-content\") pod \"certified-operators-4wvkw\" (UID: \"47229e54-b901-49f9-9cf8-25f65374d9ee\") " pod="openshift-marketplace/certified-operators-4wvkw" Jan 21 13:19:18 crc kubenswrapper[4765]: I0121 13:19:18.936312 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47229e54-b901-49f9-9cf8-25f65374d9ee-utilities\") pod \"certified-operators-4wvkw\" (UID: \"47229e54-b901-49f9-9cf8-25f65374d9ee\") " pod="openshift-marketplace/certified-operators-4wvkw" Jan 21 13:19:18 crc kubenswrapper[4765]: I0121 13:19:18.954453 4765 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jvdmw\" (UniqueName: \"kubernetes.io/projected/47229e54-b901-49f9-9cf8-25f65374d9ee-kube-api-access-jvdmw\") pod \"certified-operators-4wvkw\" (UID: \"47229e54-b901-49f9-9cf8-25f65374d9ee\") " pod="openshift-marketplace/certified-operators-4wvkw" Jan 21 13:19:19 crc kubenswrapper[4765]: I0121 13:19:19.016850 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wvkw" Jan 21 13:19:19 crc kubenswrapper[4765]: E0121 13:19:19.131893 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028" Jan 21 13:19:19 crc kubenswrapper[4765]: E0121 13:19:19.132369 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nhrcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-c6994669c-65hfk_openstack-operators(4c92e105-ba8b-4828-bc30-857c5431672f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:19:19 crc kubenswrapper[4765]: E0121 13:19:19.138083 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-c6994669c-65hfk" podUID="4c92e105-ba8b-4828-bc30-857c5431672f" Jan 21 13:19:19 crc kubenswrapper[4765]: E0121 13:19:19.722008 4765 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1 is running failed: container process not found" containerID="4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 13:19:19 crc kubenswrapper[4765]: E0121 13:19:19.722338 4765 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1 is running failed: container process not found" containerID="4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 13:19:19 crc kubenswrapper[4765]: E0121 13:19:19.722582 4765 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1 is running failed: container process not found" containerID="4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 13:19:19 crc kubenswrapper[4765]: E0121 13:19:19.722607 4765 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-hm7zd" podUID="9528dde3-6eb5-4247-84e7-945a4fa7083b" containerName="registry-server" Jan 21 13:19:19 crc kubenswrapper[4765]: E0121 13:19:19.745898 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028\\\"\"" pod="openstack-operators/glance-operator-controller-manager-c6994669c-65hfk" podUID="4c92e105-ba8b-4828-bc30-857c5431672f" Jan 21 13:19:20 crc kubenswrapper[4765]: E0121 13:19:20.239619 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad" Jan 21 13:19:20 crc kubenswrapper[4765]: E0121 13:19:20.240093 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
Jan 21 13:19:20 crc kubenswrapper[4765]: E0121 13:19:20.239619 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad"
Jan 21 13:19:20 crc kubenswrapper[4765]: E0121 13:19:20.240093 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hgpr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-64cd966744-8r9cq_openstack-operators(2d19b122-8cf4-4b4a-8d31-037af2fd65fb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 13:19:20 crc kubenswrapper[4765]: E0121 13:19:20.241528 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-8r9cq" podUID="2d19b122-8cf4-4b4a-8d31-037af2fd65fb"
Jan 21 13:19:20 crc kubenswrapper[4765]: E0121 13:19:20.750275 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-8r9cq" podUID="2d19b122-8cf4-4b4a-8d31-037af2fd65fb"
Jan 21 13:19:22 crc kubenswrapper[4765]: E0121 13:19:22.667111 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737"
Jan 21 13:19:22 crc kubenswrapper[4765]: E0121 13:19:22.667373 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vtzfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-686df47fcb-97x9c_openstack-operators(2bc79302-e5a0-4288-8b2e-ee371eb775a1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 13:19:22 crc kubenswrapper[4765]: E0121 13:19:22.668617 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-97x9c" podUID="2bc79302-e5a0-4288-8b2e-ee371eb775a1"
Jan 21 13:19:22 crc kubenswrapper[4765]: E0121 13:19:22.762719 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737\\\"\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-97x9c" podUID="2bc79302-e5a0-4288-8b2e-ee371eb775a1"
Jan 21 13:19:25 crc kubenswrapper[4765]: I0121 13:19:25.616663 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 13:19:25 crc kubenswrapper[4765]: I0121 13:19:25.701950 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert\") pod \"infra-operator-controller-manager-77c48c7859-c74jr\" (UID: \"2962f7bb-1d22-4715-b609-2eb6da1de834\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr"
Jan 21 13:19:25 crc kubenswrapper[4765]: I0121 13:19:25.709859 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2962f7bb-1d22-4715-b609-2eb6da1de834-cert\") pod \"infra-operator-controller-manager-77c48c7859-c74jr\" (UID: \"2962f7bb-1d22-4715-b609-2eb6da1de834\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr"
Jan 21 13:19:25 crc kubenswrapper[4765]: I0121 13:19:25.927060 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7q2wn"
Jan 21 13:19:25 crc kubenswrapper[4765]: I0121 13:19:25.935819 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr"
Jan 21 13:19:26 crc kubenswrapper[4765]: I0121 13:19:26.310030 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7\" (UID: \"246657ac-def3-41ce-bd99-a8d00d97c86b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7"
Jan 21 13:19:26 crc kubenswrapper[4765]: I0121 13:19:26.314399 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/246657ac-def3-41ce-bd99-a8d00d97c86b-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7\" (UID: \"246657ac-def3-41ce-bd99-a8d00d97c86b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7"
Jan 21 13:19:26 crc kubenswrapper[4765]: I0121 13:19:26.496123 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-9nnq9"
Jan 21 13:19:26 crc kubenswrapper[4765]: I0121 13:19:26.504688 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7"
Jan 21 13:19:27 crc kubenswrapper[4765]: I0121 13:19:27.424634 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7"
Jan 21 13:19:27 crc kubenswrapper[4765]: I0121 13:19:27.431019 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af5f1c65-c317-4058-9d98-066b866bf83a-metrics-certs\") pod \"openstack-operator-controller-manager-75fcf77584-5dfd7\" (UID: \"af5f1c65-c317-4058-9d98-066b866bf83a\") " pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7"
Jan 21 13:19:27 crc kubenswrapper[4765]: I0121 13:19:27.640801 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-vqhrz"
Jan 21 13:19:27 crc kubenswrapper[4765]: I0121 13:19:27.649751 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7"
Jan 21 13:19:29 crc kubenswrapper[4765]: E0121 13:19:29.463105 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71"
Jan 21 13:19:29 crc kubenswrapper[4765]: E0121 13:19:29.463656 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f2zkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-c87fff755-8kq4g_openstack-operators(ecd5f054-6284-485a-8c41-6b2338a5c0f4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 13:19:29 crc kubenswrapper[4765]: E0121 13:19:29.464866 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8kq4g" podUID="ecd5f054-6284-485a-8c41-6b2338a5c0f4"
Jan 21 13:19:29 crc kubenswrapper[4765]: E0121 13:19:29.721990 4765 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1 is running failed: container process not found" containerID="4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1" cmd=["grpc_health_probe","-addr=:50051"]
Jan 21 13:19:29 crc kubenswrapper[4765]: E0121 13:19:29.722452 4765 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1 is running failed: container process not found" containerID="4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1" cmd=["grpc_health_probe","-addr=:50051"]
Jan 21 13:19:29 crc kubenswrapper[4765]: E0121 13:19:29.722839 4765 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1 is running failed: container process not found" containerID="4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1" cmd=["grpc_health_probe","-addr=:50051"]
Jan 21 13:19:29 crc kubenswrapper[4765]: E0121 13:19:29.722878 4765 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-hm7zd" podUID="9528dde3-6eb5-4247-84e7-945a4fa7083b" containerName="registry-server"
Jan 21 13:19:29 crc kubenswrapper[4765]: E0121 13:19:29.804899 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8kq4g" podUID="ecd5f054-6284-485a-8c41-6b2338a5c0f4"
Jan 21 13:19:30 crc kubenswrapper[4765]: E0121 13:19:30.003526 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf"
Jan 21 13:19:30 crc kubenswrapper[4765]: E0121 13:19:30.003675 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-57dlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-kvhff_openstack-operators(17d3ffc3-5383-4beb-91d4-db120ddb1c74): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 13:19:30 crc kubenswrapper[4765]: E0121 13:19:30.005392 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-kvhff" podUID="17d3ffc3-5383-4beb-91d4-db120ddb1c74"
Jan 21 13:19:30 crc kubenswrapper[4765]: E0121 13:19:30.697565 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8"
Jan 21 13:19:30 crc kubenswrapper[4765]: E0121 13:19:30.698159 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2b7hq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-9f958b845-dgbtx_openstack-operators(079ac5a2-3654-48e8-8bf0-597018fc2ca5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 13:19:30 crc kubenswrapper[4765]: E0121 13:19:30.699413 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dgbtx" podUID="079ac5a2-3654-48e8-8bf0-597018fc2ca5"
Jan 21 13:19:30 crc kubenswrapper[4765]: E0121 13:19:30.812194 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-kvhff" podUID="17d3ffc3-5383-4beb-91d4-db120ddb1c74"
Jan 21 13:19:30 crc kubenswrapper[4765]: E0121 13:19:30.812559 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8\\\"\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dgbtx" podUID="079ac5a2-3654-48e8-8bf0-597018fc2ca5"
Jan 21 13:19:31 crc kubenswrapper[4765]: E0121 13:19:31.332762 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e"
Jan 21 13:19:31 crc kubenswrapper[4765]: E0121 13:19:31.332943 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xnmlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-s6zq8_openstack-operators(be3fcc93-c1a3-4191-8f75-4d8aa5767593): ErrImagePull: rpc error: 
code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:19:31 crc kubenswrapper[4765]: E0121 13:19:31.334796 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-s6zq8" podUID="be3fcc93-c1a3-4191-8f75-4d8aa5767593" Jan 21 13:19:35 crc kubenswrapper[4765]: E0121 13:19:35.552121 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843" Jan 21 13:19:35 crc kubenswrapper[4765]: E0121 13:19:35.552717 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gdj98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-dhcgg_openstack-operators(4c4840ab-a9b6-4243-a2f8-e21eaa84f165): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:19:35 crc kubenswrapper[4765]: E0121 13:19:35.554046 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" 
with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dhcgg" podUID="4c4840ab-a9b6-4243-a2f8-e21eaa84f165" Jan 21 13:19:36 crc kubenswrapper[4765]: E0121 13:19:36.070042 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32" Jan 21 13:19:36 crc kubenswrapper[4765]: E0121 13:19:36.070266 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pfn99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-864f6b75bf-rxxvb_openstack-operators(c78d0245-2ac0-4576-860f-20c8ad7f7fa3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:19:36 crc kubenswrapper[4765]: E0121 13:19:36.071490 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rxxvb" podUID="c78d0245-2ac0-4576-860f-20c8ad7f7fa3" Jan 21 13:19:38 crc 
kubenswrapper[4765]: E0121 13:19:38.117825 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729" Jan 21 13:19:38 crc kubenswrapper[4765]: E0121 13:19:38.119237 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wxj7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7fc9b76cf6-kh677_openstack-operators(882965e2-7eb0-4971-9770-e750a8fe36dc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:19:38 crc kubenswrapper[4765]: E0121 13:19:38.121063 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kh677" podUID="882965e2-7eb0-4971-9770-e750a8fe36dc" Jan 21 13:19:38 crc kubenswrapper[4765]: E0121 13:19:38.535128 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 21 13:19:38 crc kubenswrapper[4765]: E0121 13:19:38.535798 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8dgj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-ql7j4_openstack-operators(cead9f09-f8d9-4cf3-960e-eb1ba8f1fa99): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:19:38 crc kubenswrapper[4765]: E0121 13:19:38.537925 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ql7j4" podUID="cead9f09-f8d9-4cf3-960e-eb1ba8f1fa99" Jan 21 13:19:38 crc kubenswrapper[4765]: E0121 13:19:38.871326 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ql7j4" podUID="cead9f09-f8d9-4cf3-960e-eb1ba8f1fa99" Jan 21 13:19:39 crc kubenswrapper[4765]: E0121 13:19:39.145936 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e" Jan 
21 13:19:39 crc kubenswrapper[4765]: E0121 13:19:39.146128 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9szsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-767fdc4f47-hv2dn_openstack-operators(30a8ff01-0173-45a7-9460-9df64146234d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:19:39 crc kubenswrapper[4765]: E0121 13:19:39.147428 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-hv2dn" podUID="30a8ff01-0173-45a7-9460-9df64146234d" Jan 21 13:19:39 crc kubenswrapper[4765]: E0121 13:19:39.687862 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231" Jan 21 13:19:39 crc kubenswrapper[4765]: E0121 13:19:39.688048 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2vfpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-m48zr_openstack-operators(953ef395-07f2-4b90-8232-77b94a176094): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:19:39 crc kubenswrapper[4765]: E0121 13:19:39.689356 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-m48zr" podUID="953ef395-07f2-4b90-8232-77b94a176094" Jan 21 13:19:39 crc kubenswrapper[4765]: E0121 13:19:39.722028 4765 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1 is running failed: container process not found" containerID="4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 13:19:39 crc kubenswrapper[4765]: E0121 13:19:39.722525 4765 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1 is running failed: container process not found" containerID="4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 13:19:39 crc kubenswrapper[4765]: E0121 13:19:39.722841 4765 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1 is running failed: container process not found" containerID="4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 13:19:39 crc kubenswrapper[4765]: E0121 13:19:39.722925 4765 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-hm7zd" podUID="9528dde3-6eb5-4247-84e7-945a4fa7083b" containerName="registry-server" Jan 21 13:19:39 crc kubenswrapper[4765]: E0121 13:19:39.835502 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.22:5001/openstack-k8s-operators/barbican-operator:fe54e3f5c518ca07d3ae28af196534f8b1dec3a3" Jan 21 13:19:39 crc kubenswrapper[4765]: E0121 13:19:39.835613 4765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.22:5001/openstack-k8s-operators/barbican-operator:fe54e3f5c518ca07d3ae28af196534f8b1dec3a3" Jan 21 13:19:39 crc kubenswrapper[4765]: E0121 13:19:39.835806 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.22:5001/openstack-k8s-operators/barbican-operator:fe54e3f5c518ca07d3ae28af196534f8b1dec3a3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zmttf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-848df65fbb-79lv9_openstack-operators(448c57b9-0176-42e1-a493-609bc853db01): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:19:39 crc kubenswrapper[4765]: E0121 13:19:39.836994 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-848df65fbb-79lv9" podUID="448c57b9-0176-42e1-a493-609bc853db01" Jan 21 13:19:39 crc kubenswrapper[4765]: E0121 13:19:39.879952 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-m48zr" podUID="953ef395-07f2-4b90-8232-77b94a176094" Jan 21 13:19:39 crc kubenswrapper[4765]: E0121 13:19:39.880038 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.22:5001/openstack-k8s-operators/barbican-operator:fe54e3f5c518ca07d3ae28af196534f8b1dec3a3\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-848df65fbb-79lv9" podUID="448c57b9-0176-42e1-a493-609bc853db01" Jan 21 13:19:39 crc kubenswrapper[4765]: E0121 13:19:39.880270 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-hv2dn" podUID="30a8ff01-0173-45a7-9460-9df64146234d" Jan 21 13:19:39 crc kubenswrapper[4765]: I0121 13:19:39.956187 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hm7zd" Jan 21 13:19:40 crc kubenswrapper[4765]: I0121 13:19:40.112472 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk4bd\" (UniqueName: \"kubernetes.io/projected/9528dde3-6eb5-4247-84e7-945a4fa7083b-kube-api-access-tk4bd\") pod \"9528dde3-6eb5-4247-84e7-945a4fa7083b\" (UID: \"9528dde3-6eb5-4247-84e7-945a4fa7083b\") " Jan 21 13:19:40 crc kubenswrapper[4765]: I0121 13:19:40.116156 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9528dde3-6eb5-4247-84e7-945a4fa7083b-catalog-content\") pod \"9528dde3-6eb5-4247-84e7-945a4fa7083b\" (UID: \"9528dde3-6eb5-4247-84e7-945a4fa7083b\") " Jan 21 13:19:40 crc kubenswrapper[4765]: I0121 13:19:40.116347 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9528dde3-6eb5-4247-84e7-945a4fa7083b-utilities\") pod \"9528dde3-6eb5-4247-84e7-945a4fa7083b\" (UID: \"9528dde3-6eb5-4247-84e7-945a4fa7083b\") " Jan 21 13:19:40 crc kubenswrapper[4765]: I0121 13:19:40.119137 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9528dde3-6eb5-4247-84e7-945a4fa7083b-utilities" (OuterVolumeSpecName: "utilities") pod "9528dde3-6eb5-4247-84e7-945a4fa7083b" (UID: "9528dde3-6eb5-4247-84e7-945a4fa7083b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:19:40 crc kubenswrapper[4765]: I0121 13:19:40.147601 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9528dde3-6eb5-4247-84e7-945a4fa7083b-kube-api-access-tk4bd" (OuterVolumeSpecName: "kube-api-access-tk4bd") pod "9528dde3-6eb5-4247-84e7-945a4fa7083b" (UID: "9528dde3-6eb5-4247-84e7-945a4fa7083b"). InnerVolumeSpecName "kube-api-access-tk4bd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:19:40 crc kubenswrapper[4765]: I0121 13:19:40.214258 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9528dde3-6eb5-4247-84e7-945a4fa7083b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9528dde3-6eb5-4247-84e7-945a4fa7083b" (UID: "9528dde3-6eb5-4247-84e7-945a4fa7083b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:19:40 crc kubenswrapper[4765]: I0121 13:19:40.219065 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9528dde3-6eb5-4247-84e7-945a4fa7083b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:19:40 crc kubenswrapper[4765]: I0121 13:19:40.219104 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk4bd\" (UniqueName: \"kubernetes.io/projected/9528dde3-6eb5-4247-84e7-945a4fa7083b-kube-api-access-tk4bd\") on node \"crc\" DevicePath \"\"" Jan 21 13:19:40 crc kubenswrapper[4765]: I0121 13:19:40.219114 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9528dde3-6eb5-4247-84e7-945a4fa7083b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:19:40 crc kubenswrapper[4765]: I0121 13:19:40.744807 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr"] Jan 21 13:19:40 crc kubenswrapper[4765]: I0121 13:19:40.931595 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-r429h" event={"ID":"bdcf568f-99c9-4432-b763-ce16903da409","Type":"ContainerStarted","Data":"d0b3a21d5c12af3c58336132694373964d48e2320ee79740709febc15703470d"} Jan 21 13:19:40 crc kubenswrapper[4765]: I0121 13:19:40.931701 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-r429h" Jan 21 13:19:40 crc kubenswrapper[4765]: I0121 13:19:40.946505 4765 generic.go:334] "Generic (PLEG): container finished" podID="00652994-f3cc-4cd8-946c-670c24b0e8a7" containerID="4197c425cde6bc4bd50dfc43741ed3f45500c0e74b4a3e447c454fe3a3f1db29" exitCode=0 Jan 21 13:19:40 crc kubenswrapper[4765]: I0121 13:19:40.946768 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btvw5" event={"ID":"00652994-f3cc-4cd8-946c-670c24b0e8a7","Type":"ContainerDied","Data":"4197c425cde6bc4bd50dfc43741ed3f45500c0e74b4a3e447c454fe3a3f1db29"} Jan 21 13:19:40 crc kubenswrapper[4765]: I0121 13:19:40.969255 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-65hfk" event={"ID":"4c92e105-ba8b-4828-bc30-857c5431672f","Type":"ContainerStarted","Data":"1e1beb9428007b0711926545796fac32fd7d674d84af63de401f2d1eb71ab514"} Jan 21 13:19:40 crc kubenswrapper[4765]: I0121 13:19:40.970396 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-65hfk" Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.000526 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4wvkw"] Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.013832 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7"] Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.018942 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-r429h" podStartSLOduration=13.64155985 podStartE2EDuration="47.018904673s" podCreationTimestamp="2026-01-21 13:18:54 +0000 UTC" firstStartedPulling="2026-01-21 13:18:57.941329334 +0000 UTC m=+998.959055156" 
lastFinishedPulling="2026-01-21 13:19:31.318674157 +0000 UTC m=+1032.336399979" observedRunningTime="2026-01-21 13:19:40.96182354 +0000 UTC m=+1041.979549362" watchObservedRunningTime="2026-01-21 13:19:41.018904673 +0000 UTC m=+1042.036630495" Jan 21 13:19:41 crc kubenswrapper[4765]: W0121 13:19:41.030337 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod246657ac_def3_41ce_bd99_a8d00d97c86b.slice/crio-f3230df2f50ad8dec5b06c1caa74e3570438c77318ba0d4f71510aeb76be0494 WatchSource:0}: Error finding container f3230df2f50ad8dec5b06c1caa74e3570438c77318ba0d4f71510aeb76be0494: Status 404 returned error can't find the container with id f3230df2f50ad8dec5b06c1caa74e3570438c77318ba0d4f71510aeb76be0494 Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.039704 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7"] Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.043794 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-t42c2" event={"ID":"00c36135-159f-43be-be7c-b4f01cf2ace7","Type":"ContainerStarted","Data":"0da0611558742d23d46efbbe576436fecb8ca846b7733cb7ee15d6c09767eeff"} Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.044523 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-t42c2" Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.082025 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-c6994669c-65hfk" podStartSLOduration=5.312580931 podStartE2EDuration="48.081989297s" podCreationTimestamp="2026-01-21 13:18:53 +0000 UTC" firstStartedPulling="2026-01-21 13:18:57.242197043 +0000 UTC m=+998.259922865" lastFinishedPulling="2026-01-21 13:19:40.011605409 +0000 UTC m=+1041.029331231" observedRunningTime="2026-01-21 13:19:41.074513514 +0000 UTC m=+1042.092239336" watchObservedRunningTime="2026-01-21 13:19:41.081989297 +0000 UTC m=+1042.099715119" Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.085021 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"d6699bbbe2d11832c001ff2e320299357488d5335ab1941c1de1fb9e99aec3a1"} Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.108566 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm7zd" event={"ID":"9528dde3-6eb5-4247-84e7-945a4fa7083b","Type":"ContainerDied","Data":"9feef96e2bb5ec95ffa1a8a70a68a041c4516e58a404cd8ee9b38df10964dee5"} Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.108653 4765 scope.go:117] "RemoveContainer" containerID="4f0d9a7757e2fc5039c949a1574754da2b445aea81b984969d8c3f64311518a1" Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.109093 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hm7zd" Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.212916 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rk4x7" event={"ID":"2a3c28ee-e170-4592-8291-db76c15675d1","Type":"ContainerStarted","Data":"12f4884499055f0dbbefb55cf5ec202cb8f2b1deef6c927be5156b414bbb69c5"} Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.213749 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rk4x7" Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.234791 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-t42c2" podStartSLOduration=6.061664613 podStartE2EDuration="48.234769752s" podCreationTimestamp="2026-01-21 13:18:53 +0000 UTC" firstStartedPulling="2026-01-21 13:18:57.822283409 +0000 UTC m=+998.840009231" lastFinishedPulling="2026-01-21 13:19:39.995388548 +0000 UTC m=+1041.013114370" observedRunningTime="2026-01-21 13:19:41.119843813 +0000 UTC m=+1042.137569655" watchObservedRunningTime="2026-01-21 13:19:41.234769752 +0000 UTC m=+1042.252495574" Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.249938 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-8r9cq" event={"ID":"2d19b122-8cf4-4b4a-8d31-037af2fd65fb","Type":"ContainerStarted","Data":"6e93332d97f674949614a74f4faa15e7db7f0dd38bc9b9f39fc7ff7269a24add"} Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.250689 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-8r9cq" Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.274182 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-gh9vl" event={"ID":"c7a6160a-aef5-41af-b1cc-cc2cd97125d7","Type":"ContainerStarted","Data":"417de666c618e836130fe3c4b548dbe2679f14bbc3625ef72980b325f6ddd949"} Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.275016 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-gh9vl" Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.277192 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rk4x7" podStartSLOduration=6.119369193 podStartE2EDuration="48.277171417s" podCreationTimestamp="2026-01-21 13:18:53 +0000 UTC" firstStartedPulling="2026-01-21 13:18:57.848768632 +0000 UTC m=+998.866494454" lastFinishedPulling="2026-01-21 13:19:40.006570866 +0000 UTC m=+1041.024296678" observedRunningTime="2026-01-21 13:19:41.27446805 +0000 UTC m=+1042.292193882" watchObservedRunningTime="2026-01-21 13:19:41.277171417 +0000 UTC m=+1042.294897239" Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.333405 4765 scope.go:117] "RemoveContainer" containerID="06cee472d8bb2213748819f73825358be0afc0338f9125f8a596acf0caff0c19" Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.341111 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hm7zd"] Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.341144 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-kq85p" event={"ID":"cd5b6743-7a2a-4d03-8adc-952fb87e6f02","Type":"ContainerStarted","Data":"707c62cef82f3022f8d659bc19d21c232ea73d6964c614700ebe0a65cfe8033e"} Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.341854 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-kq85p" Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.382959 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hm7zd"] Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.394817 4765 scope.go:117] "RemoveContainer" containerID="05673f5a6f41b94010b66b42185564fdacd1cfc5b780ec50ad927c54c5ffa200" Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.395453 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-8pvpr" event={"ID":"ab7eaa76-7a22-4d3c-85a3-9b643832d707","Type":"ContainerStarted","Data":"1debf3e1d7e5fe9995b7d079cd3f05f2160c92cdee91e45390693c0bf9070885"} Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.396762 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-8pvpr" Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.418672 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-gh9vl" podStartSLOduration=14.493351903 podStartE2EDuration="47.4186404s" podCreationTimestamp="2026-01-21 13:18:54 +0000 UTC" firstStartedPulling="2026-01-21 13:18:58.392486584 +0000 UTC m=+999.410212406" lastFinishedPulling="2026-01-21 13:19:31.317775081 +0000 UTC m=+1032.335500903" observedRunningTime="2026-01-21 13:19:41.383957954 +0000 UTC m=+1042.401683786" watchObservedRunningTime="2026-01-21 13:19:41.4186404 +0000 UTC m=+1042.436366222" Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.454452 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-8r9cq" podStartSLOduration=5.905206101 podStartE2EDuration="47.454431788s" podCreationTimestamp="2026-01-21 13:18:54 +0000 UTC" firstStartedPulling="2026-01-21 13:18:58.477333367 +0000 UTC m=+999.495059189" lastFinishedPulling="2026-01-21 13:19:40.026559054 +0000 UTC m=+1041.044284876" observedRunningTime="2026-01-21 13:19:41.45413822 +0000 UTC m=+1042.471864052" watchObservedRunningTime="2026-01-21 13:19:41.454431788 +0000 UTC m=+1042.472157610" Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.550001 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-kq85p" podStartSLOduration=13.552390148 podStartE2EDuration="48.549979325s" podCreationTimestamp="2026-01-21 13:18:53 +0000 UTC" firstStartedPulling="2026-01-21 13:18:56.319725581 +0000 UTC m=+997.337451393" lastFinishedPulling="2026-01-21 13:19:31.317314748 +0000 UTC m=+1032.335040570" observedRunningTime="2026-01-21 13:19:41.527138036 +0000 UTC m=+1042.544863878" watchObservedRunningTime="2026-01-21 13:19:41.549979325 +0000 UTC m=+1042.567705157" Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.582117 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-8pvpr" 
podStartSLOduration=6.341409528 podStartE2EDuration="48.582098709s" podCreationTimestamp="2026-01-21 13:18:53 +0000 UTC" firstStartedPulling="2026-01-21 13:18:57.791400121 +0000 UTC m=+998.809125943" lastFinishedPulling="2026-01-21 13:19:40.032089302 +0000 UTC m=+1041.049815124" observedRunningTime="2026-01-21 13:19:41.576984273 +0000 UTC m=+1042.594710095" watchObservedRunningTime="2026-01-21 13:19:41.582098709 +0000 UTC m=+1042.599824531" Jan 21 13:19:41 crc kubenswrapper[4765]: I0121 13:19:41.641516 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9528dde3-6eb5-4247-84e7-945a4fa7083b" path="/var/lib/kubelet/pods/9528dde3-6eb5-4247-84e7-945a4fa7083b/volumes" Jan 21 13:19:42 crc kubenswrapper[4765]: I0121 13:19:42.403905 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" event={"ID":"af5f1c65-c317-4058-9d98-066b866bf83a","Type":"ContainerStarted","Data":"a25e6fd084b74d7f4bf8475edb1d3e5cd8a4d8fcc7557745e54e133217198c4a"} Jan 21 13:19:42 crc kubenswrapper[4765]: I0121 13:19:42.407397 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wvkw" event={"ID":"47229e54-b901-49f9-9cf8-25f65374d9ee","Type":"ContainerStarted","Data":"caca28be76bb0bcde8bc8199036aaf74d1181756f9d565e9f6731f5bdbdc7243"} Jan 21 13:19:42 crc kubenswrapper[4765]: I0121 13:19:42.408539 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" event={"ID":"246657ac-def3-41ce-bd99-a8d00d97c86b","Type":"ContainerStarted","Data":"f3230df2f50ad8dec5b06c1caa74e3570438c77318ba0d4f71510aeb76be0494"} Jan 21 13:19:42 crc kubenswrapper[4765]: I0121 13:19:42.409467 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr" event={"ID":"2962f7bb-1d22-4715-b609-2eb6da1de834","Type":"ContainerStarted","Data":"c50388a9a56d9f358e68d10d46d7408f297524f9446e27848f94159c7893aca4"} Jan 21 13:19:42 crc kubenswrapper[4765]: I0121 13:19:42.411949 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-97x9c" event={"ID":"2bc79302-e5a0-4288-8b2e-ee371eb775a1","Type":"ContainerStarted","Data":"cdcaf3e3fac145bfd932ab21eb8f6ee25c9aa1cf3f047abe7e6eeef9763711ff"} Jan 21 13:19:42 crc kubenswrapper[4765]: I0121 13:19:42.413912 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-97x9c" Jan 21 13:19:42 crc kubenswrapper[4765]: I0121 13:19:42.451692 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-97x9c" podStartSLOduration=6.924191828 podStartE2EDuration="48.451669657s" podCreationTimestamp="2026-01-21 13:18:54 +0000 UTC" firstStartedPulling="2026-01-21 13:18:58.486061435 +0000 UTC m=+999.503787267" lastFinishedPulling="2026-01-21 13:19:40.013539264 +0000 UTC m=+1041.031265096" observedRunningTime="2026-01-21 13:19:42.444245196 +0000 UTC m=+1043.461971018" watchObservedRunningTime="2026-01-21 13:19:42.451669657 +0000 UTC m=+1043.469395489" Jan 21 13:19:42 crc kubenswrapper[4765]: E0121 13:19:42.616234 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-s6zq8" podUID="be3fcc93-c1a3-4191-8f75-4d8aa5767593" Jan 21 13:19:44 crc kubenswrapper[4765]: I0121 13:19:44.441793 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" event={"ID":"af5f1c65-c317-4058-9d98-066b866bf83a","Type":"ContainerStarted","Data":"093f41b71661c1b41a2bc7d7db932a50008ec0b34021913601543026644a6158"} Jan 21 13:19:44 crc kubenswrapper[4765]: I0121 13:19:44.442799 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" Jan 21 13:19:44 crc kubenswrapper[4765]: I0121 13:19:44.449103 4765 generic.go:334] "Generic (PLEG): container finished" podID="47229e54-b901-49f9-9cf8-25f65374d9ee" containerID="74c3a7005e9365beba788239d0104d773142527454419441bc7b525cbeee991c" exitCode=0 Jan 21 13:19:44 crc kubenswrapper[4765]: I0121 13:19:44.449473 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wvkw" event={"ID":"47229e54-b901-49f9-9cf8-25f65374d9ee","Type":"ContainerDied","Data":"74c3a7005e9365beba788239d0104d773142527454419441bc7b525cbeee991c"} Jan 21 13:19:44 crc kubenswrapper[4765]: I0121 13:19:44.468585 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btvw5" event={"ID":"00652994-f3cc-4cd8-946c-670c24b0e8a7","Type":"ContainerStarted","Data":"f931968548c653e721e07126459e74d892d7d62bd1316ea0a0f30d8d2b9d77fc"} Jan 21 13:19:44 crc kubenswrapper[4765]: I0121 13:19:44.494640 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" podStartSLOduration=50.494621382 podStartE2EDuration="50.494621382s" podCreationTimestamp="2026-01-21 13:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:19:44.488949711 +0000 UTC m=+1045.506675533" watchObservedRunningTime="2026-01-21 13:19:44.494621382 +0000 UTC m=+1045.512347204" Jan 21 13:19:44 crc kubenswrapper[4765]: I0121 13:19:44.529826 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-btvw5" podStartSLOduration=3.6060701760000002 podStartE2EDuration="45.529803983s" podCreationTimestamp="2026-01-21 13:18:59 +0000 UTC" firstStartedPulling="2026-01-21 13:19:01.968359381 +0000 UTC m=+1002.986085203" lastFinishedPulling="2026-01-21 13:19:43.892093198 +0000 UTC m=+1044.909819010" observedRunningTime="2026-01-21 13:19:44.524993216 +0000 UTC m=+1045.542719038" watchObservedRunningTime="2026-01-21 13:19:44.529803983 +0000 UTC m=+1045.547529805" Jan 21 13:19:45 crc kubenswrapper[4765]: I0121 13:19:45.491947 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-gh9vl" Jan 21 13:19:45 crc kubenswrapper[4765]: I0121 13:19:45.494398 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dgbtx" event={"ID":"079ac5a2-3654-48e8-8bf0-597018fc2ca5","Type":"ContainerStarted","Data":"556859bdaa9cb7224bc9ed9c7880fbc59320a5b014fcf0efabb2caf90a1b25c0"} Jan 21 13:19:45 crc 
kubenswrapper[4765]: I0121 13:19:45.494795 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dgbtx" Jan 21 13:19:45 crc kubenswrapper[4765]: I0121 13:19:45.499229 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8kq4g" event={"ID":"ecd5f054-6284-485a-8c41-6b2338a5c0f4","Type":"ContainerStarted","Data":"b57ff4ba8e5938809f812b35b0dadc60ed5965c517d07666fc9540cae6fba3b5"} Jan 21 13:19:45 crc kubenswrapper[4765]: I0121 13:19:45.499649 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8kq4g" Jan 21 13:19:45 crc kubenswrapper[4765]: I0121 13:19:45.501836 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-kvhff" event={"ID":"17d3ffc3-5383-4beb-91d4-db120ddb1c74","Type":"ContainerStarted","Data":"c06d88aa0aff00f5797faaa32c77d889caa773ec6fb28fb5abdae7dc02b8c628"} Jan 21 13:19:45 crc kubenswrapper[4765]: I0121 13:19:45.502259 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-kvhff" Jan 21 13:19:45 crc kubenswrapper[4765]: I0121 13:19:45.505978 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wvkw" event={"ID":"47229e54-b901-49f9-9cf8-25f65374d9ee","Type":"ContainerStarted","Data":"58ea8ce3a89e24e0a677d0ed4580f5b51f897894399694b7154ae611e85781ea"} Jan 21 13:19:45 crc kubenswrapper[4765]: I0121 13:19:45.623496 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dgbtx" podStartSLOduration=6.02776126 podStartE2EDuration="52.623458374s" podCreationTimestamp="2026-01-21 13:18:53 +0000 UTC" firstStartedPulling="2026-01-21 13:18:57.62540305 +0000 UTC m=+998.643128872" lastFinishedPulling="2026-01-21 13:19:44.221100164 +0000 UTC m=+1045.238825986" observedRunningTime="2026-01-21 13:19:45.615134907 +0000 UTC m=+1046.632860729" watchObservedRunningTime="2026-01-21 13:19:45.623458374 +0000 UTC m=+1046.641184196" Jan 21 13:19:45 crc kubenswrapper[4765]: I0121 13:19:45.644099 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-8r9cq" Jan 21 13:19:45 crc kubenswrapper[4765]: I0121 13:19:45.688686 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-kvhff" podStartSLOduration=5.793370222 podStartE2EDuration="51.688668338s" podCreationTimestamp="2026-01-21 13:18:54 +0000 UTC" firstStartedPulling="2026-01-21 13:18:58.466358775 +0000 UTC m=+999.484084597" lastFinishedPulling="2026-01-21 13:19:44.361656891 +0000 UTC m=+1045.379382713" observedRunningTime="2026-01-21 13:19:45.657809201 +0000 UTC m=+1046.675535033" watchObservedRunningTime="2026-01-21 13:19:45.688668338 +0000 UTC m=+1046.706394160" Jan 21 13:19:45 crc kubenswrapper[4765]: I0121 13:19:45.689305 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8kq4g" podStartSLOduration=6.245339507 podStartE2EDuration="52.689300796s" podCreationTimestamp="2026-01-21 13:18:53 +0000 UTC" firstStartedPulling="2026-01-21 13:18:57.77835459 +0000 UTC 
m=+998.796080412" lastFinishedPulling="2026-01-21 13:19:44.222315879 +0000 UTC m=+1045.240041701" observedRunningTime="2026-01-21 13:19:45.687657989 +0000 UTC m=+1046.705383831" watchObservedRunningTime="2026-01-21 13:19:45.689300796 +0000 UTC m=+1046.707026608" Jan 21 13:19:46 crc kubenswrapper[4765]: I0121 13:19:46.521727 4765 generic.go:334] "Generic (PLEG): container finished" podID="47229e54-b901-49f9-9cf8-25f65374d9ee" containerID="58ea8ce3a89e24e0a677d0ed4580f5b51f897894399694b7154ae611e85781ea" exitCode=0 Jan 21 13:19:46 crc kubenswrapper[4765]: I0121 13:19:46.523319 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wvkw" event={"ID":"47229e54-b901-49f9-9cf8-25f65374d9ee","Type":"ContainerDied","Data":"58ea8ce3a89e24e0a677d0ed4580f5b51f897894399694b7154ae611e85781ea"} Jan 21 13:19:47 crc kubenswrapper[4765]: E0121 13:19:47.615946 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dhcgg" podUID="4c4840ab-a9b6-4243-a2f8-e21eaa84f165" Jan 21 13:19:49 crc kubenswrapper[4765]: I0121 13:19:49.385554 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-btvw5" Jan 21 13:19:49 crc kubenswrapper[4765]: I0121 13:19:49.386247 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-btvw5" Jan 21 13:19:49 crc kubenswrapper[4765]: I0121 13:19:49.454738 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-btvw5" Jan 21 13:19:49 crc kubenswrapper[4765]: I0121 13:19:49.591107 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-btvw5" Jan 21 13:19:49 crc kubenswrapper[4765]: E0121 13:19:49.617360 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rxxvb" podUID="c78d0245-2ac0-4576-860f-20c8ad7f7fa3" Jan 21 13:19:49 crc kubenswrapper[4765]: I0121 13:19:49.701590 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-btvw5"] Jan 21 13:19:50 crc kubenswrapper[4765]: I0121 13:19:50.554145 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wvkw" event={"ID":"47229e54-b901-49f9-9cf8-25f65374d9ee","Type":"ContainerStarted","Data":"768e92cc289e1b9f568da9c8aef75fa5ced51d30d492c2f15a3605f20f769a0d"} Jan 21 13:19:50 crc kubenswrapper[4765]: I0121 13:19:50.556851 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ql7j4" event={"ID":"cead9f09-f8d9-4cf3-960e-eb1ba8f1fa99","Type":"ContainerStarted","Data":"e47660a29ddb35cf83b4c6e0f1b100de20f978e0b3c4a3713ff278fea7b937bf"} Jan 21 13:19:50 crc kubenswrapper[4765]: I0121 13:19:50.559097 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" event={"ID":"246657ac-def3-41ce-bd99-a8d00d97c86b","Type":"ContainerStarted","Data":"2205336364cd0508eff85679a5eaee5c2e768e890db099da8b3919cfff8fb2ee"} Jan 21 13:19:50 crc kubenswrapper[4765]: I0121 13:19:50.559277 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" Jan 21 13:19:50 crc kubenswrapper[4765]: I0121 13:19:50.560641 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr" event={"ID":"2962f7bb-1d22-4715-b609-2eb6da1de834","Type":"ContainerStarted","Data":"0085922d6df7a500b51e5eaa4b68cf4c8942b8a56bb8ed98e19a7388a4192305"} Jan 21 13:19:50 crc kubenswrapper[4765]: I0121 13:19:50.581775 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4wvkw" podStartSLOduration=26.961921472 podStartE2EDuration="32.581753514s" podCreationTimestamp="2026-01-21 13:19:18 +0000 UTC" firstStartedPulling="2026-01-21 13:19:44.45197723 +0000 UTC m=+1045.469703042" lastFinishedPulling="2026-01-21 13:19:50.071809252 +0000 UTC m=+1051.089535084" observedRunningTime="2026-01-21 13:19:50.580175239 +0000 UTC m=+1051.597901061" watchObservedRunningTime="2026-01-21 13:19:50.581753514 +0000 UTC m=+1051.599479336" Jan 21 13:19:50 crc kubenswrapper[4765]: I0121 13:19:50.605216 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ql7j4" podStartSLOduration=3.951649594 podStartE2EDuration="55.60518572s" podCreationTimestamp="2026-01-21 13:18:55 +0000 UTC" firstStartedPulling="2026-01-21 13:18:58.460899089 +0000 UTC m=+999.478624901" lastFinishedPulling="2026-01-21 13:19:50.114435205 +0000 UTC m=+1051.132161027" observedRunningTime="2026-01-21 13:19:50.603932205 +0000 UTC m=+1051.621658037" watchObservedRunningTime="2026-01-21 13:19:50.60518572 +0000 UTC m=+1051.622911542" Jan 21 13:19:50 crc kubenswrapper[4765]: I0121 13:19:50.626953 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr" podStartSLOduration=48.522136454 podStartE2EDuration="57.626930199s" podCreationTimestamp="2026-01-21 13:18:53 +0000 UTC" firstStartedPulling="2026-01-21 13:19:40.966809982 +0000 UTC m=+1041.984535804" lastFinishedPulling="2026-01-21 13:19:50.071603727 +0000 UTC m=+1051.089329549" observedRunningTime="2026-01-21 13:19:50.620100004 +0000 UTC m=+1051.637825816" watchObservedRunningTime="2026-01-21 13:19:50.626930199 +0000 UTC m=+1051.644656021" Jan 21 13:19:50 crc kubenswrapper[4765]: I0121 13:19:50.651627 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" podStartSLOduration=47.646825329 podStartE2EDuration="56.65161049s" podCreationTimestamp="2026-01-21 13:18:54 +0000 UTC" firstStartedPulling="2026-01-21 13:19:41.066830276 +0000 UTC m=+1042.084556098" lastFinishedPulling="2026-01-21 13:19:50.071615437 +0000 UTC m=+1051.089341259" observedRunningTime="2026-01-21 13:19:50.648282496 +0000 UTC m=+1051.666008318" watchObservedRunningTime="2026-01-21 13:19:50.65161049 +0000 UTC m=+1051.669336312" Jan 21 13:19:51 crc kubenswrapper[4765]: I0121 13:19:51.566229 4765 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/redhat-marketplace-btvw5" podUID="00652994-f3cc-4cd8-946c-670c24b0e8a7" containerName="registry-server" containerID="cri-o://f931968548c653e721e07126459e74d892d7d62bd1316ea0a0f30d8d2b9d77fc" gracePeriod=2 Jan 21 13:19:51 crc kubenswrapper[4765]: I0121 13:19:51.566877 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr" Jan 21 13:19:52 crc kubenswrapper[4765]: I0121 13:19:52.580553 4765 generic.go:334] "Generic (PLEG): container finished" podID="00652994-f3cc-4cd8-946c-670c24b0e8a7" containerID="f931968548c653e721e07126459e74d892d7d62bd1316ea0a0f30d8d2b9d77fc" exitCode=0 Jan 21 13:19:52 crc kubenswrapper[4765]: I0121 13:19:52.582010 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btvw5" event={"ID":"00652994-f3cc-4cd8-946c-670c24b0e8a7","Type":"ContainerDied","Data":"f931968548c653e721e07126459e74d892d7d62bd1316ea0a0f30d8d2b9d77fc"} Jan 21 13:19:52 crc kubenswrapper[4765]: I0121 13:19:52.582053 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btvw5" event={"ID":"00652994-f3cc-4cd8-946c-670c24b0e8a7","Type":"ContainerDied","Data":"c1567f293f1b144d8580f4b900d7bcfde0b56462ecb1cfb15d8995f3d3ddb861"} Jan 21 13:19:52 crc kubenswrapper[4765]: I0121 13:19:52.582070 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1567f293f1b144d8580f4b900d7bcfde0b56462ecb1cfb15d8995f3d3ddb861" Jan 21 13:19:52 crc kubenswrapper[4765]: E0121 13:19:52.618879 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kh677" podUID="882965e2-7eb0-4971-9770-e750a8fe36dc" Jan 21 13:19:52 crc kubenswrapper[4765]: I0121 13:19:52.621045 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btvw5" Jan 21 13:19:52 crc kubenswrapper[4765]: I0121 13:19:52.779254 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00652994-f3cc-4cd8-946c-670c24b0e8a7-utilities\") pod \"00652994-f3cc-4cd8-946c-670c24b0e8a7\" (UID: \"00652994-f3cc-4cd8-946c-670c24b0e8a7\") " Jan 21 13:19:52 crc kubenswrapper[4765]: I0121 13:19:52.779867 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz9f8\" (UniqueName: \"kubernetes.io/projected/00652994-f3cc-4cd8-946c-670c24b0e8a7-kube-api-access-xz9f8\") pod \"00652994-f3cc-4cd8-946c-670c24b0e8a7\" (UID: \"00652994-f3cc-4cd8-946c-670c24b0e8a7\") " Jan 21 13:19:52 crc kubenswrapper[4765]: I0121 13:19:52.779899 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00652994-f3cc-4cd8-946c-670c24b0e8a7-catalog-content\") pod \"00652994-f3cc-4cd8-946c-670c24b0e8a7\" (UID: \"00652994-f3cc-4cd8-946c-670c24b0e8a7\") " Jan 21 13:19:52 crc kubenswrapper[4765]: I0121 13:19:52.780268 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00652994-f3cc-4cd8-946c-670c24b0e8a7-utilities" (OuterVolumeSpecName: "utilities") pod "00652994-f3cc-4cd8-946c-670c24b0e8a7" (UID: "00652994-f3cc-4cd8-946c-670c24b0e8a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:19:52 crc kubenswrapper[4765]: I0121 13:19:52.780451 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00652994-f3cc-4cd8-946c-670c24b0e8a7-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:19:52 crc kubenswrapper[4765]: I0121 13:19:52.787542 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00652994-f3cc-4cd8-946c-670c24b0e8a7-kube-api-access-xz9f8" (OuterVolumeSpecName: "kube-api-access-xz9f8") pod "00652994-f3cc-4cd8-946c-670c24b0e8a7" (UID: "00652994-f3cc-4cd8-946c-670c24b0e8a7"). InnerVolumeSpecName "kube-api-access-xz9f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:19:52 crc kubenswrapper[4765]: I0121 13:19:52.804665 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00652994-f3cc-4cd8-946c-670c24b0e8a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00652994-f3cc-4cd8-946c-670c24b0e8a7" (UID: "00652994-f3cc-4cd8-946c-670c24b0e8a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:19:52 crc kubenswrapper[4765]: I0121 13:19:52.881879 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz9f8\" (UniqueName: \"kubernetes.io/projected/00652994-f3cc-4cd8-946c-670c24b0e8a7-kube-api-access-xz9f8\") on node \"crc\" DevicePath \"\"" Jan 21 13:19:52 crc kubenswrapper[4765]: I0121 13:19:52.881931 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00652994-f3cc-4cd8-946c-670c24b0e8a7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:19:53 crc kubenswrapper[4765]: I0121 13:19:53.590822 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-848df65fbb-79lv9" event={"ID":"448c57b9-0176-42e1-a493-609bc853db01","Type":"ContainerStarted","Data":"5c6823bd9412e86fa91687bba50cc2e13fd43157a77b0d4415895c6e16bbfa6e"} Jan 21 13:19:53 crc kubenswrapper[4765]: I0121 13:19:53.594789 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-848df65fbb-79lv9" Jan 21 13:19:53 crc kubenswrapper[4765]: I0121 13:19:53.598243 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btvw5" Jan 21 13:19:53 crc kubenswrapper[4765]: I0121 13:19:53.602291 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-m48zr" event={"ID":"953ef395-07f2-4b90-8232-77b94a176094","Type":"ContainerStarted","Data":"21f550e9ecb9c01c44eaf82a95f0fb3b0638ae02c03d397bc9b578e3c8bc5ca5"} Jan 21 13:19:53 crc kubenswrapper[4765]: I0121 13:19:53.602528 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-m48zr" Jan 21 13:19:53 crc kubenswrapper[4765]: I0121 13:19:53.644381 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-848df65fbb-79lv9" podStartSLOduration=5.168512705 podStartE2EDuration="1m0.644354645s" podCreationTimestamp="2026-01-21 13:18:53 +0000 UTC" firstStartedPulling="2026-01-21 13:18:57.684053038 +0000 UTC m=+998.701778860" lastFinishedPulling="2026-01-21 13:19:53.159894978 +0000 UTC m=+1054.177620800" observedRunningTime="2026-01-21 13:19:53.633707462 +0000 UTC m=+1054.651433304" watchObservedRunningTime="2026-01-21 13:19:53.644354645 +0000 UTC m=+1054.662080467" Jan 21 13:19:53 crc kubenswrapper[4765]: I0121 13:19:53.660025 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-65849867d6-m48zr" podStartSLOduration=4.833058083 podStartE2EDuration="59.6600008s" podCreationTimestamp="2026-01-21 13:18:54 +0000 UTC" firstStartedPulling="2026-01-21 13:18:58.473953631 +0000 UTC m=+999.491679453" lastFinishedPulling="2026-01-21 13:19:53.300896348 +0000 UTC m=+1054.318622170" observedRunningTime="2026-01-21 13:19:53.654641578 +0000 UTC m=+1054.672367400" watchObservedRunningTime="2026-01-21 13:19:53.6600008 +0000 UTC m=+1054.677726622" Jan 21 13:19:53 crc kubenswrapper[4765]: I0121 13:19:53.675236 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-btvw5"] Jan 21 13:19:53 crc kubenswrapper[4765]: I0121 13:19:53.682607 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-btvw5"] Jan 21 13:19:53 crc kubenswrapper[4765]: I0121 13:19:53.901980 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-kq85p" Jan 21 13:19:53 crc kubenswrapper[4765]: I0121 13:19:53.935892 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-dgbtx" Jan 21 13:19:53 crc kubenswrapper[4765]: I0121 13:19:53.969452 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-8pvpr" Jan 21 13:19:53 crc kubenswrapper[4765]: I0121 13:19:53.990110 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-65hfk" Jan 21 13:19:54 crc kubenswrapper[4765]: I0121 13:19:54.155851 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-rk4x7" Jan 21 13:19:54 crc kubenswrapper[4765]: I0121 13:19:54.255724 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8kq4g" Jan 21 13:19:54 crc kubenswrapper[4765]: I0121 13:19:54.364182 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-t42c2" Jan 21 13:19:54 crc kubenswrapper[4765]: I0121 13:19:54.824013 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-kvhff" Jan 21 13:19:54 crc kubenswrapper[4765]: I0121 13:19:54.867075 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-r429h" Jan 21 13:19:54 crc kubenswrapper[4765]: I0121 13:19:54.893597 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-97x9c" Jan 21 13:19:55 crc kubenswrapper[4765]: I0121 13:19:55.647178 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00652994-f3cc-4cd8-946c-670c24b0e8a7" path="/var/lib/kubelet/pods/00652994-f3cc-4cd8-946c-670c24b0e8a7/volumes" Jan 21 13:19:55 crc kubenswrapper[4765]: I0121 13:19:55.665455 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-s6zq8" event={"ID":"be3fcc93-c1a3-4191-8f75-4d8aa5767593","Type":"ContainerStarted","Data":"67acd37d13e06553742a0156ac4a7f710442b89a1f3f12485217a340837bc343"} Jan 21 13:19:55 crc kubenswrapper[4765]: I0121 13:19:55.666690 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-s6zq8" Jan 21 13:19:55 crc kubenswrapper[4765]: I0121 13:19:55.692609 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-s6zq8" podStartSLOduration=5.094516158 podStartE2EDuration="1m1.692580861s" podCreationTimestamp="2026-01-21 13:18:54 +0000 UTC" firstStartedPulling="2026-01-21 13:18:58.502743759 +0000 UTC m=+999.520469581" lastFinishedPulling="2026-01-21 13:19:55.100808462 +0000 UTC m=+1056.118534284" observedRunningTime="2026-01-21 13:19:55.689546684 
+0000 UTC m=+1056.707272526" watchObservedRunningTime="2026-01-21 13:19:55.692580861 +0000 UTC m=+1056.710306683" Jan 21 13:19:55 crc kubenswrapper[4765]: I0121 13:19:55.942896 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-c74jr" Jan 21 13:19:56 crc kubenswrapper[4765]: I0121 13:19:56.545480 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7" Jan 21 13:19:57 crc kubenswrapper[4765]: I0121 13:19:57.658108 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-75fcf77584-5dfd7" Jan 21 13:19:57 crc kubenswrapper[4765]: I0121 13:19:57.952997 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-hv2dn" event={"ID":"30a8ff01-0173-45a7-9460-9df64146234d","Type":"ContainerStarted","Data":"0bc988737b32d8220099f3a2cfb1e20d89c700263d7e6a16caf245d661dc2e0e"} Jan 21 13:19:57 crc kubenswrapper[4765]: I0121 13:19:57.953271 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-hv2dn" Jan 21 13:19:57 crc kubenswrapper[4765]: I0121 13:19:57.977733 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-hv2dn" podStartSLOduration=5.740603764 podStartE2EDuration="1m4.977709874s" podCreationTimestamp="2026-01-21 13:18:53 +0000 UTC" firstStartedPulling="2026-01-21 13:18:57.939734399 +0000 UTC m=+998.957460221" lastFinishedPulling="2026-01-21 13:19:57.176840509 +0000 UTC m=+1058.194566331" observedRunningTime="2026-01-21 13:19:57.975409568 +0000 UTC m=+1058.993135390" watchObservedRunningTime="2026-01-21 13:19:57.977709874 +0000 UTC m=+1058.995435696" Jan 21 13:19:59 crc kubenswrapper[4765]: I0121 13:19:59.017663 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4wvkw" Jan 21 13:19:59 crc kubenswrapper[4765]: I0121 13:19:59.017843 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4wvkw" Jan 21 13:19:59 crc kubenswrapper[4765]: I0121 13:19:59.062875 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4wvkw" Jan 21 13:20:00 crc kubenswrapper[4765]: I0121 13:20:00.015676 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4wvkw" Jan 21 13:20:00 crc kubenswrapper[4765]: I0121 13:20:00.063684 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4wvkw"] Jan 21 13:20:01 crc kubenswrapper[4765]: I0121 13:20:01.987643 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dhcgg" event={"ID":"4c4840ab-a9b6-4243-a2f8-e21eaa84f165","Type":"ContainerStarted","Data":"632d44c16123ab6bc51eaf5c2cb50ab3f31a376e6b518e4d5c384a70ee064696"} Jan 21 13:20:01 crc kubenswrapper[4765]: I0121 13:20:01.987876 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4wvkw" podUID="47229e54-b901-49f9-9cf8-25f65374d9ee" containerName="registry-server" 
containerID="cri-o://768e92cc289e1b9f568da9c8aef75fa5ced51d30d492c2f15a3605f20f769a0d" gracePeriod=2 Jan 21 13:20:01 crc kubenswrapper[4765]: I0121 13:20:01.988611 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dhcgg" Jan 21 13:20:02 crc kubenswrapper[4765]: I0121 13:20:02.021941 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dhcgg" podStartSLOduration=5.452095426 podStartE2EDuration="1m8.021907349s" podCreationTimestamp="2026-01-21 13:18:54 +0000 UTC" firstStartedPulling="2026-01-21 13:18:58.500577728 +0000 UTC m=+999.518303550" lastFinishedPulling="2026-01-21 13:20:01.070389661 +0000 UTC m=+1062.088115473" observedRunningTime="2026-01-21 13:20:02.014679073 +0000 UTC m=+1063.032404895" watchObservedRunningTime="2026-01-21 13:20:02.021907349 +0000 UTC m=+1063.039633181" Jan 21 13:20:02 crc kubenswrapper[4765]: I0121 13:20:02.391703 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wvkw" Jan 21 13:20:02 crc kubenswrapper[4765]: I0121 13:20:02.406092 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47229e54-b901-49f9-9cf8-25f65374d9ee-utilities\") pod \"47229e54-b901-49f9-9cf8-25f65374d9ee\" (UID: \"47229e54-b901-49f9-9cf8-25f65374d9ee\") " Jan 21 13:20:02 crc kubenswrapper[4765]: I0121 13:20:02.406160 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47229e54-b901-49f9-9cf8-25f65374d9ee-catalog-content\") pod \"47229e54-b901-49f9-9cf8-25f65374d9ee\" (UID: \"47229e54-b901-49f9-9cf8-25f65374d9ee\") " Jan 21 13:20:02 crc kubenswrapper[4765]: I0121 13:20:02.406267 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvdmw\" (UniqueName: \"kubernetes.io/projected/47229e54-b901-49f9-9cf8-25f65374d9ee-kube-api-access-jvdmw\") pod \"47229e54-b901-49f9-9cf8-25f65374d9ee\" (UID: \"47229e54-b901-49f9-9cf8-25f65374d9ee\") " Jan 21 13:20:02 crc kubenswrapper[4765]: I0121 13:20:02.407715 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47229e54-b901-49f9-9cf8-25f65374d9ee-utilities" (OuterVolumeSpecName: "utilities") pod "47229e54-b901-49f9-9cf8-25f65374d9ee" (UID: "47229e54-b901-49f9-9cf8-25f65374d9ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:20:02 crc kubenswrapper[4765]: I0121 13:20:02.414390 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47229e54-b901-49f9-9cf8-25f65374d9ee-kube-api-access-jvdmw" (OuterVolumeSpecName: "kube-api-access-jvdmw") pod "47229e54-b901-49f9-9cf8-25f65374d9ee" (UID: "47229e54-b901-49f9-9cf8-25f65374d9ee"). InnerVolumeSpecName "kube-api-access-jvdmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:20:02 crc kubenswrapper[4765]: I0121 13:20:02.479725 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47229e54-b901-49f9-9cf8-25f65374d9ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47229e54-b901-49f9-9cf8-25f65374d9ee" (UID: "47229e54-b901-49f9-9cf8-25f65374d9ee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:20:02 crc kubenswrapper[4765]: I0121 13:20:02.508651 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvdmw\" (UniqueName: \"kubernetes.io/projected/47229e54-b901-49f9-9cf8-25f65374d9ee-kube-api-access-jvdmw\") on node \"crc\" DevicePath \"\"" Jan 21 13:20:02 crc kubenswrapper[4765]: I0121 13:20:02.508701 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47229e54-b901-49f9-9cf8-25f65374d9ee-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:20:02 crc kubenswrapper[4765]: I0121 13:20:02.508718 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47229e54-b901-49f9-9cf8-25f65374d9ee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:20:02 crc kubenswrapper[4765]: I0121 13:20:02.998438 4765 generic.go:334] "Generic (PLEG): container finished" podID="47229e54-b901-49f9-9cf8-25f65374d9ee" containerID="768e92cc289e1b9f568da9c8aef75fa5ced51d30d492c2f15a3605f20f769a0d" exitCode=0 Jan 21 13:20:02 crc kubenswrapper[4765]: I0121 13:20:02.998531 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wvkw" event={"ID":"47229e54-b901-49f9-9cf8-25f65374d9ee","Type":"ContainerDied","Data":"768e92cc289e1b9f568da9c8aef75fa5ced51d30d492c2f15a3605f20f769a0d"} Jan 21 13:20:02 crc kubenswrapper[4765]: I0121 13:20:02.998571 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wvkw" event={"ID":"47229e54-b901-49f9-9cf8-25f65374d9ee","Type":"ContainerDied","Data":"caca28be76bb0bcde8bc8199036aaf74d1181756f9d565e9f6731f5bdbdc7243"} Jan 21 13:20:02 crc kubenswrapper[4765]: I0121 13:20:02.998601 4765 scope.go:117] "RemoveContainer" containerID="768e92cc289e1b9f568da9c8aef75fa5ced51d30d492c2f15a3605f20f769a0d" Jan 21 13:20:02 crc kubenswrapper[4765]: I0121 13:20:02.998642 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4wvkw" Jan 21 13:20:03 crc kubenswrapper[4765]: I0121 13:20:03.001930 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rxxvb" event={"ID":"c78d0245-2ac0-4576-860f-20c8ad7f7fa3","Type":"ContainerStarted","Data":"39bd20ab28defee033046007709d55962af7ef25f3a59b5bb4962fdf00f7beb2"} Jan 21 13:20:03 crc kubenswrapper[4765]: I0121 13:20:03.002425 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rxxvb" Jan 21 13:20:03 crc kubenswrapper[4765]: I0121 13:20:03.018631 4765 scope.go:117] "RemoveContainer" containerID="58ea8ce3a89e24e0a677d0ed4580f5b51f897894399694b7154ae611e85781ea" Jan 21 13:20:03 crc kubenswrapper[4765]: I0121 13:20:03.025676 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rxxvb" podStartSLOduration=6.290200673 podStartE2EDuration="1m10.025656463s" podCreationTimestamp="2026-01-21 13:18:53 +0000 UTC" firstStartedPulling="2026-01-21 13:18:58.500680731 +0000 UTC m=+999.518406553" lastFinishedPulling="2026-01-21 13:20:02.236136521 +0000 UTC m=+1063.253862343" observedRunningTime="2026-01-21 13:20:03.023375328 +0000 UTC m=+1064.041101160" watchObservedRunningTime="2026-01-21 13:20:03.025656463 +0000 UTC m=+1064.043382285" Jan 21 13:20:03 crc kubenswrapper[4765]: I0121 13:20:03.054546 4765 scope.go:117] "RemoveContainer" containerID="74c3a7005e9365beba788239d0104d773142527454419441bc7b525cbeee991c" Jan 21 13:20:03 crc kubenswrapper[4765]: I0121 13:20:03.059827 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4wvkw"] Jan 21 13:20:03 crc kubenswrapper[4765]: I0121 13:20:03.066597 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4wvkw"] Jan 21 13:20:03 crc kubenswrapper[4765]: I0121 13:20:03.078178 4765 scope.go:117] "RemoveContainer" containerID="768e92cc289e1b9f568da9c8aef75fa5ced51d30d492c2f15a3605f20f769a0d" Jan 21 13:20:03 crc kubenswrapper[4765]: E0121 13:20:03.080783 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"768e92cc289e1b9f568da9c8aef75fa5ced51d30d492c2f15a3605f20f769a0d\": container with ID starting with 768e92cc289e1b9f568da9c8aef75fa5ced51d30d492c2f15a3605f20f769a0d not found: ID does not exist" containerID="768e92cc289e1b9f568da9c8aef75fa5ced51d30d492c2f15a3605f20f769a0d" Jan 21 13:20:03 crc kubenswrapper[4765]: I0121 13:20:03.080834 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"768e92cc289e1b9f568da9c8aef75fa5ced51d30d492c2f15a3605f20f769a0d"} err="failed to get container status \"768e92cc289e1b9f568da9c8aef75fa5ced51d30d492c2f15a3605f20f769a0d\": rpc error: code = NotFound desc = could not find container \"768e92cc289e1b9f568da9c8aef75fa5ced51d30d492c2f15a3605f20f769a0d\": container with ID starting with 768e92cc289e1b9f568da9c8aef75fa5ced51d30d492c2f15a3605f20f769a0d not found: ID does not exist" Jan 21 13:20:03 crc kubenswrapper[4765]: I0121 13:20:03.080873 4765 scope.go:117] "RemoveContainer" containerID="58ea8ce3a89e24e0a677d0ed4580f5b51f897894399694b7154ae611e85781ea" Jan 21 13:20:03 crc kubenswrapper[4765]: E0121 13:20:03.081454 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"58ea8ce3a89e24e0a677d0ed4580f5b51f897894399694b7154ae611e85781ea\": container with ID starting with 58ea8ce3a89e24e0a677d0ed4580f5b51f897894399694b7154ae611e85781ea not found: ID does not exist" containerID="58ea8ce3a89e24e0a677d0ed4580f5b51f897894399694b7154ae611e85781ea" Jan 21 13:20:03 crc kubenswrapper[4765]: I0121 13:20:03.081502 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ea8ce3a89e24e0a677d0ed4580f5b51f897894399694b7154ae611e85781ea"} err="failed to get container status \"58ea8ce3a89e24e0a677d0ed4580f5b51f897894399694b7154ae611e85781ea\": rpc error: code = NotFound desc = could not find container \"58ea8ce3a89e24e0a677d0ed4580f5b51f897894399694b7154ae611e85781ea\": container with ID starting with 58ea8ce3a89e24e0a677d0ed4580f5b51f897894399694b7154ae611e85781ea not found: ID does not exist" Jan 21 13:20:03 crc kubenswrapper[4765]: I0121 13:20:03.081520 4765 scope.go:117] "RemoveContainer" containerID="74c3a7005e9365beba788239d0104d773142527454419441bc7b525cbeee991c" Jan 21 13:20:03 crc kubenswrapper[4765]: E0121 13:20:03.081789 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74c3a7005e9365beba788239d0104d773142527454419441bc7b525cbeee991c\": container with ID starting with 74c3a7005e9365beba788239d0104d773142527454419441bc7b525cbeee991c not found: ID does not exist" containerID="74c3a7005e9365beba788239d0104d773142527454419441bc7b525cbeee991c" Jan 21 13:20:03 crc kubenswrapper[4765]: I0121 13:20:03.081826 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74c3a7005e9365beba788239d0104d773142527454419441bc7b525cbeee991c"} err="failed to get container status \"74c3a7005e9365beba788239d0104d773142527454419441bc7b525cbeee991c\": rpc error: code = NotFound desc = could not find container \"74c3a7005e9365beba788239d0104d773142527454419441bc7b525cbeee991c\": container with ID starting with 74c3a7005e9365beba788239d0104d773142527454419441bc7b525cbeee991c not found: ID does not exist" Jan 21 13:20:03 crc kubenswrapper[4765]: I0121 13:20:03.622967 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47229e54-b901-49f9-9cf8-25f65374d9ee" path="/var/lib/kubelet/pods/47229e54-b901-49f9-9cf8-25f65374d9ee/volumes" Jan 21 13:20:03 crc kubenswrapper[4765]: I0121 13:20:03.883139 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-848df65fbb-79lv9" Jan 21 13:20:04 crc kubenswrapper[4765]: I0121 13:20:04.464038 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-hv2dn" Jan 21 13:20:04 crc kubenswrapper[4765]: I0121 13:20:04.610713 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-m48zr" Jan 21 13:20:05 crc kubenswrapper[4765]: I0121 13:20:05.531976 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-s6zq8" Jan 21 13:20:06 crc kubenswrapper[4765]: I0121 13:20:06.022146 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kh677" 
event={"ID":"882965e2-7eb0-4971-9770-e750a8fe36dc","Type":"ContainerStarted","Data":"1b72ab145ae92e51e02c3a16e06bffad30bb27565b4538db88bd438c4bddb6b8"} Jan 21 13:20:06 crc kubenswrapper[4765]: I0121 13:20:06.022332 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kh677" Jan 21 13:20:06 crc kubenswrapper[4765]: I0121 13:20:06.039054 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kh677" podStartSLOduration=6.159876626 podStartE2EDuration="1m13.039036134s" podCreationTimestamp="2026-01-21 13:18:53 +0000 UTC" firstStartedPulling="2026-01-21 13:18:58.500900497 +0000 UTC m=+999.518626319" lastFinishedPulling="2026-01-21 13:20:05.380060005 +0000 UTC m=+1066.397785827" observedRunningTime="2026-01-21 13:20:06.036971716 +0000 UTC m=+1067.054697538" watchObservedRunningTime="2026-01-21 13:20:06.039036134 +0000 UTC m=+1067.056761956" Jan 21 13:20:14 crc kubenswrapper[4765]: I0121 13:20:14.465564 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rxxvb" Jan 21 13:20:14 crc kubenswrapper[4765]: I0121 13:20:14.666379 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-kh677" Jan 21 13:20:15 crc kubenswrapper[4765]: I0121 13:20:15.491075 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dhcgg" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.155135 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wj7pv"] Jan 21 13:20:33 crc kubenswrapper[4765]: E0121 13:20:33.156110 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00652994-f3cc-4cd8-946c-670c24b0e8a7" containerName="extract-utilities" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.156127 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="00652994-f3cc-4cd8-946c-670c24b0e8a7" containerName="extract-utilities" Jan 21 13:20:33 crc kubenswrapper[4765]: E0121 13:20:33.156144 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9528dde3-6eb5-4247-84e7-945a4fa7083b" containerName="extract-content" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.156151 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9528dde3-6eb5-4247-84e7-945a4fa7083b" containerName="extract-content" Jan 21 13:20:33 crc kubenswrapper[4765]: E0121 13:20:33.156163 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47229e54-b901-49f9-9cf8-25f65374d9ee" containerName="extract-content" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.156170 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="47229e54-b901-49f9-9cf8-25f65374d9ee" containerName="extract-content" Jan 21 13:20:33 crc kubenswrapper[4765]: E0121 13:20:33.156180 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9528dde3-6eb5-4247-84e7-945a4fa7083b" containerName="extract-utilities" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.156187 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9528dde3-6eb5-4247-84e7-945a4fa7083b" containerName="extract-utilities" Jan 21 13:20:33 crc kubenswrapper[4765]: E0121 13:20:33.156220 4765 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="9528dde3-6eb5-4247-84e7-945a4fa7083b" containerName="registry-server" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.156238 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9528dde3-6eb5-4247-84e7-945a4fa7083b" containerName="registry-server" Jan 21 13:20:33 crc kubenswrapper[4765]: E0121 13:20:33.156249 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47229e54-b901-49f9-9cf8-25f65374d9ee" containerName="extract-utilities" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.156259 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="47229e54-b901-49f9-9cf8-25f65374d9ee" containerName="extract-utilities" Jan 21 13:20:33 crc kubenswrapper[4765]: E0121 13:20:33.156271 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00652994-f3cc-4cd8-946c-670c24b0e8a7" containerName="extract-content" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.156279 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="00652994-f3cc-4cd8-946c-670c24b0e8a7" containerName="extract-content" Jan 21 13:20:33 crc kubenswrapper[4765]: E0121 13:20:33.156296 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47229e54-b901-49f9-9cf8-25f65374d9ee" containerName="registry-server" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.156303 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="47229e54-b901-49f9-9cf8-25f65374d9ee" containerName="registry-server" Jan 21 13:20:33 crc kubenswrapper[4765]: E0121 13:20:33.156321 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00652994-f3cc-4cd8-946c-670c24b0e8a7" containerName="registry-server" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.156330 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="00652994-f3cc-4cd8-946c-670c24b0e8a7" containerName="registry-server" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.156489 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="47229e54-b901-49f9-9cf8-25f65374d9ee" containerName="registry-server" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.156509 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="00652994-f3cc-4cd8-946c-670c24b0e8a7" containerName="registry-server" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.156524 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="9528dde3-6eb5-4247-84e7-945a4fa7083b" containerName="registry-server" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.157530 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wj7pv" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.159995 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.171330 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-svdpr" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.171479 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.175716 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.177927 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wj7pv"] Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.273872 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6rd5t"] Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.275451 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6rd5t" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.281688 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.293103 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6rd5t"] Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.324775 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpmdc\" (UniqueName: \"kubernetes.io/projected/29c45b3c-a3c8-4480-895c-a86ec81ede26-kube-api-access-mpmdc\") pod \"dnsmasq-dns-675f4bcbfc-wj7pv\" (UID: \"29c45b3c-a3c8-4480-895c-a86ec81ede26\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wj7pv" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.324851 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29c45b3c-a3c8-4480-895c-a86ec81ede26-config\") pod \"dnsmasq-dns-675f4bcbfc-wj7pv\" (UID: \"29c45b3c-a3c8-4480-895c-a86ec81ede26\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wj7pv" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.425812 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpmdc\" (UniqueName: \"kubernetes.io/projected/29c45b3c-a3c8-4480-895c-a86ec81ede26-kube-api-access-mpmdc\") pod \"dnsmasq-dns-675f4bcbfc-wj7pv\" (UID: \"29c45b3c-a3c8-4480-895c-a86ec81ede26\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wj7pv" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.425883 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zvtr\" (UniqueName: \"kubernetes.io/projected/9b9057ca-badb-4b3d-95e9-ec32dcd52f2e-kube-api-access-5zvtr\") pod \"dnsmasq-dns-78dd6ddcc-6rd5t\" (UID: \"9b9057ca-badb-4b3d-95e9-ec32dcd52f2e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6rd5t" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.425926 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b9057ca-badb-4b3d-95e9-ec32dcd52f2e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6rd5t\" (UID: \"9b9057ca-badb-4b3d-95e9-ec32dcd52f2e\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-6rd5t" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.425947 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9057ca-badb-4b3d-95e9-ec32dcd52f2e-config\") pod \"dnsmasq-dns-78dd6ddcc-6rd5t\" (UID: \"9b9057ca-badb-4b3d-95e9-ec32dcd52f2e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6rd5t" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.426856 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29c45b3c-a3c8-4480-895c-a86ec81ede26-config\") pod \"dnsmasq-dns-675f4bcbfc-wj7pv\" (UID: \"29c45b3c-a3c8-4480-895c-a86ec81ede26\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wj7pv" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.428143 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29c45b3c-a3c8-4480-895c-a86ec81ede26-config\") pod \"dnsmasq-dns-675f4bcbfc-wj7pv\" (UID: \"29c45b3c-a3c8-4480-895c-a86ec81ede26\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wj7pv" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.454587 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpmdc\" (UniqueName: \"kubernetes.io/projected/29c45b3c-a3c8-4480-895c-a86ec81ede26-kube-api-access-mpmdc\") pod \"dnsmasq-dns-675f4bcbfc-wj7pv\" (UID: \"29c45b3c-a3c8-4480-895c-a86ec81ede26\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wj7pv" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.477283 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wj7pv" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.528034 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zvtr\" (UniqueName: \"kubernetes.io/projected/9b9057ca-badb-4b3d-95e9-ec32dcd52f2e-kube-api-access-5zvtr\") pod \"dnsmasq-dns-78dd6ddcc-6rd5t\" (UID: \"9b9057ca-badb-4b3d-95e9-ec32dcd52f2e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6rd5t" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.528121 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b9057ca-badb-4b3d-95e9-ec32dcd52f2e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6rd5t\" (UID: \"9b9057ca-badb-4b3d-95e9-ec32dcd52f2e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6rd5t" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.528163 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9057ca-badb-4b3d-95e9-ec32dcd52f2e-config\") pod \"dnsmasq-dns-78dd6ddcc-6rd5t\" (UID: \"9b9057ca-badb-4b3d-95e9-ec32dcd52f2e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6rd5t" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.529536 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9057ca-badb-4b3d-95e9-ec32dcd52f2e-config\") pod \"dnsmasq-dns-78dd6ddcc-6rd5t\" (UID: \"9b9057ca-badb-4b3d-95e9-ec32dcd52f2e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6rd5t" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.531048 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b9057ca-badb-4b3d-95e9-ec32dcd52f2e-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6rd5t\" (UID: 
\"9b9057ca-badb-4b3d-95e9-ec32dcd52f2e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6rd5t" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.554826 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zvtr\" (UniqueName: \"kubernetes.io/projected/9b9057ca-badb-4b3d-95e9-ec32dcd52f2e-kube-api-access-5zvtr\") pod \"dnsmasq-dns-78dd6ddcc-6rd5t\" (UID: \"9b9057ca-badb-4b3d-95e9-ec32dcd52f2e\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6rd5t" Jan 21 13:20:33 crc kubenswrapper[4765]: I0121 13:20:33.591086 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6rd5t" Jan 21 13:20:34 crc kubenswrapper[4765]: I0121 13:20:34.038444 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wj7pv"] Jan 21 13:20:34 crc kubenswrapper[4765]: I0121 13:20:34.133819 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6rd5t"] Jan 21 13:20:34 crc kubenswrapper[4765]: W0121 13:20:34.136867 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b9057ca_badb_4b3d_95e9_ec32dcd52f2e.slice/crio-5de449948007eac2577df71c51baa6e28c6a8606cb12fe0a64ff1565f6760049 WatchSource:0}: Error finding container 5de449948007eac2577df71c51baa6e28c6a8606cb12fe0a64ff1565f6760049: Status 404 returned error can't find the container with id 5de449948007eac2577df71c51baa6e28c6a8606cb12fe0a64ff1565f6760049 Jan 21 13:20:34 crc kubenswrapper[4765]: I0121 13:20:34.244506 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6rd5t" event={"ID":"9b9057ca-badb-4b3d-95e9-ec32dcd52f2e","Type":"ContainerStarted","Data":"5de449948007eac2577df71c51baa6e28c6a8606cb12fe0a64ff1565f6760049"} Jan 21 13:20:34 crc kubenswrapper[4765]: I0121 13:20:34.246550 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-wj7pv" event={"ID":"29c45b3c-a3c8-4480-895c-a86ec81ede26","Type":"ContainerStarted","Data":"25d00f1a25629548f8b4e0fc0e9fc8ffd875b229a98b863ba04c7997e0e058cc"} Jan 21 13:20:35 crc kubenswrapper[4765]: I0121 13:20:35.898421 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wj7pv"] Jan 21 13:20:35 crc kubenswrapper[4765]: I0121 13:20:35.937885 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5wtns"] Jan 21 13:20:35 crc kubenswrapper[4765]: I0121 13:20:35.939704 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5wtns" Jan 21 13:20:35 crc kubenswrapper[4765]: I0121 13:20:35.970912 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5wtns"] Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.039558 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d732c2fd-5d00-4ab9-8482-dc376f1924cb-config\") pod \"dnsmasq-dns-666b6646f7-5wtns\" (UID: \"d732c2fd-5d00-4ab9-8482-dc376f1924cb\") " pod="openstack/dnsmasq-dns-666b6646f7-5wtns" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.039746 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv94r\" (UniqueName: \"kubernetes.io/projected/d732c2fd-5d00-4ab9-8482-dc376f1924cb-kube-api-access-cv94r\") pod \"dnsmasq-dns-666b6646f7-5wtns\" (UID: \"d732c2fd-5d00-4ab9-8482-dc376f1924cb\") " pod="openstack/dnsmasq-dns-666b6646f7-5wtns" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.039800 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d732c2fd-5d00-4ab9-8482-dc376f1924cb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5wtns\" (UID: \"d732c2fd-5d00-4ab9-8482-dc376f1924cb\") " pod="openstack/dnsmasq-dns-666b6646f7-5wtns" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.140496 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d732c2fd-5d00-4ab9-8482-dc376f1924cb-config\") pod \"dnsmasq-dns-666b6646f7-5wtns\" (UID: \"d732c2fd-5d00-4ab9-8482-dc376f1924cb\") " pod="openstack/dnsmasq-dns-666b6646f7-5wtns" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.140584 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv94r\" (UniqueName: \"kubernetes.io/projected/d732c2fd-5d00-4ab9-8482-dc376f1924cb-kube-api-access-cv94r\") pod \"dnsmasq-dns-666b6646f7-5wtns\" (UID: \"d732c2fd-5d00-4ab9-8482-dc376f1924cb\") " pod="openstack/dnsmasq-dns-666b6646f7-5wtns" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.140605 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d732c2fd-5d00-4ab9-8482-dc376f1924cb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5wtns\" (UID: \"d732c2fd-5d00-4ab9-8482-dc376f1924cb\") " pod="openstack/dnsmasq-dns-666b6646f7-5wtns" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.141514 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d732c2fd-5d00-4ab9-8482-dc376f1924cb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5wtns\" (UID: \"d732c2fd-5d00-4ab9-8482-dc376f1924cb\") " pod="openstack/dnsmasq-dns-666b6646f7-5wtns" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.142032 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d732c2fd-5d00-4ab9-8482-dc376f1924cb-config\") pod \"dnsmasq-dns-666b6646f7-5wtns\" (UID: \"d732c2fd-5d00-4ab9-8482-dc376f1924cb\") " pod="openstack/dnsmasq-dns-666b6646f7-5wtns" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.171697 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv94r\" (UniqueName: 
\"kubernetes.io/projected/d732c2fd-5d00-4ab9-8482-dc376f1924cb-kube-api-access-cv94r\") pod \"dnsmasq-dns-666b6646f7-5wtns\" (UID: \"d732c2fd-5d00-4ab9-8482-dc376f1924cb\") " pod="openstack/dnsmasq-dns-666b6646f7-5wtns" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.304062 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5wtns" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.308266 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6rd5t"] Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.372948 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-sxfhw"] Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.374109 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.412849 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-sxfhw"] Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.455756 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5pmn\" (UniqueName: \"kubernetes.io/projected/64c10d89-0a3a-4106-b34e-ff8252758f2c-kube-api-access-l5pmn\") pod \"dnsmasq-dns-57d769cc4f-sxfhw\" (UID: \"64c10d89-0a3a-4106-b34e-ff8252758f2c\") " pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.456800 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64c10d89-0a3a-4106-b34e-ff8252758f2c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-sxfhw\" (UID: \"64c10d89-0a3a-4106-b34e-ff8252758f2c\") " pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.457074 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c10d89-0a3a-4106-b34e-ff8252758f2c-config\") pod \"dnsmasq-dns-57d769cc4f-sxfhw\" (UID: \"64c10d89-0a3a-4106-b34e-ff8252758f2c\") " pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.559017 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5pmn\" (UniqueName: \"kubernetes.io/projected/64c10d89-0a3a-4106-b34e-ff8252758f2c-kube-api-access-l5pmn\") pod \"dnsmasq-dns-57d769cc4f-sxfhw\" (UID: \"64c10d89-0a3a-4106-b34e-ff8252758f2c\") " pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.559074 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64c10d89-0a3a-4106-b34e-ff8252758f2c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-sxfhw\" (UID: \"64c10d89-0a3a-4106-b34e-ff8252758f2c\") " pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.559123 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c10d89-0a3a-4106-b34e-ff8252758f2c-config\") pod \"dnsmasq-dns-57d769cc4f-sxfhw\" (UID: \"64c10d89-0a3a-4106-b34e-ff8252758f2c\") " pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.560231 4765 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c10d89-0a3a-4106-b34e-ff8252758f2c-config\") pod \"dnsmasq-dns-57d769cc4f-sxfhw\" (UID: \"64c10d89-0a3a-4106-b34e-ff8252758f2c\") " pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.560735 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64c10d89-0a3a-4106-b34e-ff8252758f2c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-sxfhw\" (UID: \"64c10d89-0a3a-4106-b34e-ff8252758f2c\") " pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.595679 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5pmn\" (UniqueName: \"kubernetes.io/projected/64c10d89-0a3a-4106-b34e-ff8252758f2c-kube-api-access-l5pmn\") pod \"dnsmasq-dns-57d769cc4f-sxfhw\" (UID: \"64c10d89-0a3a-4106-b34e-ff8252758f2c\") " pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.702435 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" Jan 21 13:20:36 crc kubenswrapper[4765]: W0121 13:20:36.944657 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd732c2fd_5d00_4ab9_8482_dc376f1924cb.slice/crio-540aa33e1cc153fddf1f3974325d6d62daf3fbc97e2709502d25959bd7d34449 WatchSource:0}: Error finding container 540aa33e1cc153fddf1f3974325d6d62daf3fbc97e2709502d25959bd7d34449: Status 404 returned error can't find the container with id 540aa33e1cc153fddf1f3974325d6d62daf3fbc97e2709502d25959bd7d34449 Jan 21 13:20:36 crc kubenswrapper[4765]: I0121 13:20:36.954119 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5wtns"] Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.114881 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.116360 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.119421 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.119604 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.119751 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-t7g28" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.120129 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.122946 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.127397 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.127506 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.142401 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.173263 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.173304 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/054275fd-f5b9-4326-98a3-af2cc1d76c17-config-data\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.173324 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.173356 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.175295 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.175348 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/054275fd-f5b9-4326-98a3-af2cc1d76c17-server-conf\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.175403 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/054275fd-f5b9-4326-98a3-af2cc1d76c17-pod-info\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.175431 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/054275fd-f5b9-4326-98a3-af2cc1d76c17-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.175500 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/054275fd-f5b9-4326-98a3-af2cc1d76c17-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.175568 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.175613 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9p49\" (UniqueName: \"kubernetes.io/projected/054275fd-f5b9-4326-98a3-af2cc1d76c17-kube-api-access-f9p49\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.269183 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-sxfhw"] Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.290732 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.290777 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/054275fd-f5b9-4326-98a3-af2cc1d76c17-config-data\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.290801 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.290889 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.290932 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.290981 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/054275fd-f5b9-4326-98a3-af2cc1d76c17-server-conf\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.291012 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/054275fd-f5b9-4326-98a3-af2cc1d76c17-pod-info\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.291050 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/054275fd-f5b9-4326-98a3-af2cc1d76c17-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.291105 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/054275fd-f5b9-4326-98a3-af2cc1d76c17-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.291165 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.291265 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9p49\" (UniqueName: \"kubernetes.io/projected/054275fd-f5b9-4326-98a3-af2cc1d76c17-kube-api-access-f9p49\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.293746 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/054275fd-f5b9-4326-98a3-af2cc1d76c17-config-data\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.294199 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc 
kubenswrapper[4765]: I0121 13:20:37.294487 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.295104 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.295443 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/054275fd-f5b9-4326-98a3-af2cc1d76c17-server-conf\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.300957 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.303163 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.304192 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/054275fd-f5b9-4326-98a3-af2cc1d76c17-pod-info\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.306785 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/054275fd-f5b9-4326-98a3-af2cc1d76c17-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.318645 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/054275fd-f5b9-4326-98a3-af2cc1d76c17-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.347604 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9p49\" (UniqueName: \"kubernetes.io/projected/054275fd-f5b9-4326-98a3-af2cc1d76c17-kube-api-access-f9p49\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.349020 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " 
pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.378633 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5wtns" event={"ID":"d732c2fd-5d00-4ab9-8482-dc376f1924cb","Type":"ContainerStarted","Data":"540aa33e1cc153fddf1f3974325d6d62daf3fbc97e2709502d25959bd7d34449"} Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.450603 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.530520 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.537350 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.549331 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.550487 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-nkftp" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.550728 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.550928 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.553252 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.553857 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.554693 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.555602 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.602259 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.602314 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d783178-0ea7-4643-802f-d56722e1df7d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.602410 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.602428 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d783178-0ea7-4643-802f-d56722e1df7d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.602450 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwpkl\" (UniqueName: \"kubernetes.io/projected/4d783178-0ea7-4643-802f-d56722e1df7d-kube-api-access-xwpkl\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.602474 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.602505 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d783178-0ea7-4643-802f-d56722e1df7d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.606030 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d783178-0ea7-4643-802f-d56722e1df7d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.606117 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d783178-0ea7-4643-802f-d56722e1df7d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.606161 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.606307 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.708199 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d783178-0ea7-4643-802f-d56722e1df7d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.709985 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.710039 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d783178-0ea7-4643-802f-d56722e1df7d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.710078 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwpkl\" (UniqueName: \"kubernetes.io/projected/4d783178-0ea7-4643-802f-d56722e1df7d-kube-api-access-xwpkl\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.710144 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.710465 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d783178-0ea7-4643-802f-d56722e1df7d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.710619 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d783178-0ea7-4643-802f-d56722e1df7d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.710700 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d783178-0ea7-4643-802f-d56722e1df7d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.710728 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.711006 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.711046 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.711219 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.711259 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.712240 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d783178-0ea7-4643-802f-d56722e1df7d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.712772 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d783178-0ea7-4643-802f-d56722e1df7d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.715641 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.716904 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d783178-0ea7-4643-802f-d56722e1df7d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.717752 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d783178-0ea7-4643-802f-d56722e1df7d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.744784 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwpkl\" (UniqueName: \"kubernetes.io/projected/4d783178-0ea7-4643-802f-d56722e1df7d-kube-api-access-xwpkl\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.753857 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.754419 
4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d783178-0ea7-4643-802f-d56722e1df7d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.757122 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.759457 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:37 crc kubenswrapper[4765]: I0121 13:20:37.917340 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.223860 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 13:20:38 crc kubenswrapper[4765]: W0121 13:20:38.246397 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod054275fd_f5b9_4326_98a3_af2cc1d76c17.slice/crio-cecfb471a9be2aa3d7d4eb41b9fa91997f657a8a351cf92c0a3084ded3964424 WatchSource:0}: Error finding container cecfb471a9be2aa3d7d4eb41b9fa91997f657a8a351cf92c0a3084ded3964424: Status 404 returned error can't find the container with id cecfb471a9be2aa3d7d4eb41b9fa91997f657a8a351cf92c0a3084ded3964424 Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.404764 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" event={"ID":"64c10d89-0a3a-4106-b34e-ff8252758f2c","Type":"ContainerStarted","Data":"7e27cf707b47e5f0289584343409dcc9d3f52d68a60c561c788be967ded9492e"} Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.410616 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"054275fd-f5b9-4326-98a3-af2cc1d76c17","Type":"ContainerStarted","Data":"cecfb471a9be2aa3d7d4eb41b9fa91997f657a8a351cf92c0a3084ded3964424"} Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.485797 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.487090 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.491162 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.491541 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-64lfr" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.491646 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.491971 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.500859 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.512149 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.637663 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwc7v\" (UniqueName: \"kubernetes.io/projected/00d8ba34-9c69-4d77-a58a-e8202aa68b31-kube-api-access-nwc7v\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.637910 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/00d8ba34-9c69-4d77-a58a-e8202aa68b31-config-data-default\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.637994 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00d8ba34-9c69-4d77-a58a-e8202aa68b31-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.638039 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/00d8ba34-9c69-4d77-a58a-e8202aa68b31-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.638093 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/00d8ba34-9c69-4d77-a58a-e8202aa68b31-config-data-generated\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.638328 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.638377 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00d8ba34-9c69-4d77-a58a-e8202aa68b31-operator-scripts\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.638423 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/00d8ba34-9c69-4d77-a58a-e8202aa68b31-kolla-config\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.644726 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 13:20:38 crc kubenswrapper[4765]: W0121 13:20:38.661030 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d783178_0ea7_4643_802f_d56722e1df7d.slice/crio-62b2916abb97801b71d2644df613491a0bc09cf00dd9659618717e94d7878084 WatchSource:0}: Error finding container 62b2916abb97801b71d2644df613491a0bc09cf00dd9659618717e94d7878084: Status 404 returned error can't find the container with id 62b2916abb97801b71d2644df613491a0bc09cf00dd9659618717e94d7878084 Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.741315 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.741375 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00d8ba34-9c69-4d77-a58a-e8202aa68b31-operator-scripts\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.741417 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/00d8ba34-9c69-4d77-a58a-e8202aa68b31-kolla-config\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.741495 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwc7v\" (UniqueName: \"kubernetes.io/projected/00d8ba34-9c69-4d77-a58a-e8202aa68b31-kube-api-access-nwc7v\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.741581 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/00d8ba34-9c69-4d77-a58a-e8202aa68b31-config-data-default\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.741655 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00d8ba34-9c69-4d77-a58a-e8202aa68b31-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 
21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.741681 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/00d8ba34-9c69-4d77-a58a-e8202aa68b31-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.741761 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/00d8ba34-9c69-4d77-a58a-e8202aa68b31-config-data-generated\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.742304 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/00d8ba34-9c69-4d77-a58a-e8202aa68b31-kolla-config\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.742473 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.745454 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00d8ba34-9c69-4d77-a58a-e8202aa68b31-operator-scripts\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.745488 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/00d8ba34-9c69-4d77-a58a-e8202aa68b31-config-data-generated\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.746911 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/00d8ba34-9c69-4d77-a58a-e8202aa68b31-config-data-default\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.747257 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00d8ba34-9c69-4d77-a58a-e8202aa68b31-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.750447 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/00d8ba34-9c69-4d77-a58a-e8202aa68b31-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.780595 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwc7v\" (UniqueName: 
\"kubernetes.io/projected/00d8ba34-9c69-4d77-a58a-e8202aa68b31-kube-api-access-nwc7v\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.782513 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"00d8ba34-9c69-4d77-a58a-e8202aa68b31\") " pod="openstack/openstack-galera-0" Jan 21 13:20:38 crc kubenswrapper[4765]: I0121 13:20:38.836551 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 13:20:39 crc kubenswrapper[4765]: I0121 13:20:39.444246 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4d783178-0ea7-4643-802f-d56722e1df7d","Type":"ContainerStarted","Data":"62b2916abb97801b71d2644df613491a0bc09cf00dd9659618717e94d7878084"} Jan 21 13:20:39 crc kubenswrapper[4765]: I0121 13:20:39.703140 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 13:20:39 crc kubenswrapper[4765]: I0121 13:20:39.842537 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 13:20:39 crc kubenswrapper[4765]: I0121 13:20:39.856129 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:39 crc kubenswrapper[4765]: I0121 13:20:39.862945 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-gd265" Jan 21 13:20:39 crc kubenswrapper[4765]: I0121 13:20:39.863735 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 13:20:39 crc kubenswrapper[4765]: I0121 13:20:39.863874 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 13:20:39 crc kubenswrapper[4765]: I0121 13:20:39.867791 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 13:20:39 crc kubenswrapper[4765]: I0121 13:20:39.902271 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 13:20:39 crc kubenswrapper[4765]: I0121 13:20:39.973475 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hx8g\" (UniqueName: \"kubernetes.io/projected/cf0cab45-7e21-4b1e-a868-b19db9379c99-kube-api-access-8hx8g\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:39 crc kubenswrapper[4765]: I0121 13:20:39.973581 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf0cab45-7e21-4b1e-a868-b19db9379c99-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:39 crc kubenswrapper[4765]: I0121 13:20:39.973608 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cf0cab45-7e21-4b1e-a868-b19db9379c99-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: 
\"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:39 crc kubenswrapper[4765]: I0121 13:20:39.973696 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0cab45-7e21-4b1e-a868-b19db9379c99-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:39 crc kubenswrapper[4765]: I0121 13:20:39.973726 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cf0cab45-7e21-4b1e-a868-b19db9379c99-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:39 crc kubenswrapper[4765]: I0121 13:20:39.973749 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cf0cab45-7e21-4b1e-a868-b19db9379c99-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:39 crc kubenswrapper[4765]: I0121 13:20:39.973775 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:39 crc kubenswrapper[4765]: I0121 13:20:39.973804 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf0cab45-7e21-4b1e-a868-b19db9379c99-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.075755 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hx8g\" (UniqueName: \"kubernetes.io/projected/cf0cab45-7e21-4b1e-a868-b19db9379c99-kube-api-access-8hx8g\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.075822 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf0cab45-7e21-4b1e-a868-b19db9379c99-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.075844 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cf0cab45-7e21-4b1e-a868-b19db9379c99-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.075910 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0cab45-7e21-4b1e-a868-b19db9379c99-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: 
\"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.075949 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cf0cab45-7e21-4b1e-a868-b19db9379c99-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.075975 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cf0cab45-7e21-4b1e-a868-b19db9379c99-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.075996 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.076017 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf0cab45-7e21-4b1e-a868-b19db9379c99-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.082033 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cf0cab45-7e21-4b1e-a868-b19db9379c99-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.082828 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cf0cab45-7e21-4b1e-a868-b19db9379c99-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.083846 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.083931 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0cab45-7e21-4b1e-a868-b19db9379c99-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.084478 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cf0cab45-7e21-4b1e-a868-b19db9379c99-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:40 crc kubenswrapper[4765]: 
I0121 13:20:40.091265 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf0cab45-7e21-4b1e-a868-b19db9379c99-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.098470 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hx8g\" (UniqueName: \"kubernetes.io/projected/cf0cab45-7e21-4b1e-a868-b19db9379c99-kube-api-access-8hx8g\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.112096 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf0cab45-7e21-4b1e-a868-b19db9379c99-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.165568 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"cf0cab45-7e21-4b1e-a868-b19db9379c99\") " pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.246110 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.360693 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.361849 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.367124 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.367337 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.370766 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-v84bw" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.406335 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.491197 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02d30b98-43d0-4b3f-82c0-64193524da98-config-data\") pod \"memcached-0\" (UID: \"02d30b98-43d0-4b3f-82c0-64193524da98\") " pod="openstack/memcached-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.491585 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/02d30b98-43d0-4b3f-82c0-64193524da98-kolla-config\") pod \"memcached-0\" (UID: \"02d30b98-43d0-4b3f-82c0-64193524da98\") " pod="openstack/memcached-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.491626 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/02d30b98-43d0-4b3f-82c0-64193524da98-memcached-tls-certs\") pod \"memcached-0\" (UID: \"02d30b98-43d0-4b3f-82c0-64193524da98\") " pod="openstack/memcached-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.491650 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lbvn\" (UniqueName: \"kubernetes.io/projected/02d30b98-43d0-4b3f-82c0-64193524da98-kube-api-access-2lbvn\") pod \"memcached-0\" (UID: \"02d30b98-43d0-4b3f-82c0-64193524da98\") " pod="openstack/memcached-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.491694 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d30b98-43d0-4b3f-82c0-64193524da98-combined-ca-bundle\") pod \"memcached-0\" (UID: \"02d30b98-43d0-4b3f-82c0-64193524da98\") " pod="openstack/memcached-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.605175 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d30b98-43d0-4b3f-82c0-64193524da98-combined-ca-bundle\") pod \"memcached-0\" (UID: \"02d30b98-43d0-4b3f-82c0-64193524da98\") " pod="openstack/memcached-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.605616 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02d30b98-43d0-4b3f-82c0-64193524da98-config-data\") pod \"memcached-0\" (UID: \"02d30b98-43d0-4b3f-82c0-64193524da98\") " pod="openstack/memcached-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.605751 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/02d30b98-43d0-4b3f-82c0-64193524da98-kolla-config\") pod \"memcached-0\" (UID: \"02d30b98-43d0-4b3f-82c0-64193524da98\") " pod="openstack/memcached-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.605857 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/02d30b98-43d0-4b3f-82c0-64193524da98-memcached-tls-certs\") pod \"memcached-0\" (UID: \"02d30b98-43d0-4b3f-82c0-64193524da98\") " pod="openstack/memcached-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.605900 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lbvn\" (UniqueName: \"kubernetes.io/projected/02d30b98-43d0-4b3f-82c0-64193524da98-kube-api-access-2lbvn\") pod \"memcached-0\" (UID: \"02d30b98-43d0-4b3f-82c0-64193524da98\") " pod="openstack/memcached-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.610872 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/02d30b98-43d0-4b3f-82c0-64193524da98-kolla-config\") pod \"memcached-0\" (UID: \"02d30b98-43d0-4b3f-82c0-64193524da98\") " pod="openstack/memcached-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.612252 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02d30b98-43d0-4b3f-82c0-64193524da98-config-data\") pod \"memcached-0\" (UID: \"02d30b98-43d0-4b3f-82c0-64193524da98\") " pod="openstack/memcached-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.649764 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"00d8ba34-9c69-4d77-a58a-e8202aa68b31","Type":"ContainerStarted","Data":"14f77a9fd75e0c89b457fc36a0168f451a4bda8fcebadecaf121c5fd75cda6fa"} Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.667964 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lbvn\" (UniqueName: \"kubernetes.io/projected/02d30b98-43d0-4b3f-82c0-64193524da98-kube-api-access-2lbvn\") pod \"memcached-0\" (UID: \"02d30b98-43d0-4b3f-82c0-64193524da98\") " pod="openstack/memcached-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.690578 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/02d30b98-43d0-4b3f-82c0-64193524da98-memcached-tls-certs\") pod \"memcached-0\" (UID: \"02d30b98-43d0-4b3f-82c0-64193524da98\") " pod="openstack/memcached-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.700558 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d30b98-43d0-4b3f-82c0-64193524da98-combined-ca-bundle\") pod \"memcached-0\" (UID: \"02d30b98-43d0-4b3f-82c0-64193524da98\") " pod="openstack/memcached-0" Jan 21 13:20:40 crc kubenswrapper[4765]: I0121 13:20:40.785403 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 13:20:41 crc kubenswrapper[4765]: I0121 13:20:41.465002 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 13:20:41 crc kubenswrapper[4765]: I0121 13:20:41.698894 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cf0cab45-7e21-4b1e-a868-b19db9379c99","Type":"ContainerStarted","Data":"a19941c80a4540b74aa22d1efda393b0117b78063a338ac65a093d6e45b3065a"} Jan 21 13:20:41 crc kubenswrapper[4765]: I0121 13:20:41.747340 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 13:20:41 crc kubenswrapper[4765]: I0121 13:20:41.748442 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 13:20:41 crc kubenswrapper[4765]: I0121 13:20:41.776597 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-ckvkq" Jan 21 13:20:41 crc kubenswrapper[4765]: I0121 13:20:41.782381 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 13:20:41 crc kubenswrapper[4765]: I0121 13:20:41.841286 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fz2n\" (UniqueName: \"kubernetes.io/projected/12b93916-e6dc-4aac-809e-0dfe1b11ed1a-kube-api-access-8fz2n\") pod \"kube-state-metrics-0\" (UID: \"12b93916-e6dc-4aac-809e-0dfe1b11ed1a\") " pod="openstack/kube-state-metrics-0" Jan 21 13:20:41 crc kubenswrapper[4765]: I0121 13:20:41.937880 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 13:20:41 crc kubenswrapper[4765]: I0121 13:20:41.943124 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fz2n\" (UniqueName: \"kubernetes.io/projected/12b93916-e6dc-4aac-809e-0dfe1b11ed1a-kube-api-access-8fz2n\") pod \"kube-state-metrics-0\" (UID: \"12b93916-e6dc-4aac-809e-0dfe1b11ed1a\") " pod="openstack/kube-state-metrics-0" Jan 21 13:20:42 crc kubenswrapper[4765]: I0121 13:20:42.015025 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fz2n\" (UniqueName: \"kubernetes.io/projected/12b93916-e6dc-4aac-809e-0dfe1b11ed1a-kube-api-access-8fz2n\") pod \"kube-state-metrics-0\" (UID: \"12b93916-e6dc-4aac-809e-0dfe1b11ed1a\") " pod="openstack/kube-state-metrics-0" Jan 21 13:20:42 crc kubenswrapper[4765]: I0121 13:20:42.096920 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 13:20:42 crc kubenswrapper[4765]: I0121 13:20:42.745420 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"02d30b98-43d0-4b3f-82c0-64193524da98","Type":"ContainerStarted","Data":"bd1fe4f27ea67b51e6409916edc239399d211a9f669933d96023a7475a8ca0cd"} Jan 21 13:20:42 crc kubenswrapper[4765]: I0121 13:20:42.828497 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 13:20:42 crc kubenswrapper[4765]: W0121 13:20:42.831883 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12b93916_e6dc_4aac_809e_0dfe1b11ed1a.slice/crio-d43c66752619e80bbff14c73b01198ef9c5cd208c687a948fc4f71184bf53d53 WatchSource:0}: Error finding container d43c66752619e80bbff14c73b01198ef9c5cd208c687a948fc4f71184bf53d53: Status 404 returned error can't find the container with id d43c66752619e80bbff14c73b01198ef9c5cd208c687a948fc4f71184bf53d53 Jan 21 13:20:43 crc kubenswrapper[4765]: I0121 13:20:43.795220 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"12b93916-e6dc-4aac-809e-0dfe1b11ed1a","Type":"ContainerStarted","Data":"d43c66752619e80bbff14c73b01198ef9c5cd208c687a948fc4f71184bf53d53"} Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.435272 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gkqpl"] Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.494632 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-64shj"] Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.494993 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.499083 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-7cft2" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.501126 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.501895 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.502195 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.540759 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gkqpl"] Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.553568 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-64shj"] Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.679533 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0babea53-5832-46a5-a0e6-9fd9823cbbe9-var-lib\") pod \"ovn-controller-ovs-64shj\" (UID: \"0babea53-5832-46a5-a0e6-9fd9823cbbe9\") " pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.679582 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-var-run\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.679621 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x44xd\" (UniqueName: \"kubernetes.io/projected/0babea53-5832-46a5-a0e6-9fd9823cbbe9-kube-api-access-x44xd\") pod \"ovn-controller-ovs-64shj\" (UID: \"0babea53-5832-46a5-a0e6-9fd9823cbbe9\") " pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.679658 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-combined-ca-bundle\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.679683 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0babea53-5832-46a5-a0e6-9fd9823cbbe9-var-log\") pod \"ovn-controller-ovs-64shj\" (UID: \"0babea53-5832-46a5-a0e6-9fd9823cbbe9\") " pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.679703 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-var-log-ovn\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.679723 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-ovn-controller-tls-certs\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.679739 4765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-var-run-ovn\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.679754 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0babea53-5832-46a5-a0e6-9fd9823cbbe9-scripts\") pod \"ovn-controller-ovs-64shj\" (UID: \"0babea53-5832-46a5-a0e6-9fd9823cbbe9\") " pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.679773 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-scripts\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.679789 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjwrk\" (UniqueName: \"kubernetes.io/projected/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-kube-api-access-qjwrk\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.679812 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0babea53-5832-46a5-a0e6-9fd9823cbbe9-etc-ovs\") pod \"ovn-controller-ovs-64shj\" (UID: \"0babea53-5832-46a5-a0e6-9fd9823cbbe9\") " pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.679831 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0babea53-5832-46a5-a0e6-9fd9823cbbe9-var-run\") pod \"ovn-controller-ovs-64shj\" (UID: \"0babea53-5832-46a5-a0e6-9fd9823cbbe9\") " pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.780891 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x44xd\" (UniqueName: \"kubernetes.io/projected/0babea53-5832-46a5-a0e6-9fd9823cbbe9-kube-api-access-x44xd\") pod \"ovn-controller-ovs-64shj\" (UID: \"0babea53-5832-46a5-a0e6-9fd9823cbbe9\") " pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.781012 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-combined-ca-bundle\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.781045 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0babea53-5832-46a5-a0e6-9fd9823cbbe9-var-log\") pod \"ovn-controller-ovs-64shj\" (UID: \"0babea53-5832-46a5-a0e6-9fd9823cbbe9\") " pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.781073 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-var-log-ovn\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.781097 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-ovn-controller-tls-certs\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.781118 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-var-run-ovn\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.781139 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0babea53-5832-46a5-a0e6-9fd9823cbbe9-scripts\") pod \"ovn-controller-ovs-64shj\" (UID: \"0babea53-5832-46a5-a0e6-9fd9823cbbe9\") " pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.781162 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-scripts\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.781179 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjwrk\" (UniqueName: \"kubernetes.io/projected/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-kube-api-access-qjwrk\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.781201 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0babea53-5832-46a5-a0e6-9fd9823cbbe9-etc-ovs\") pod \"ovn-controller-ovs-64shj\" (UID: \"0babea53-5832-46a5-a0e6-9fd9823cbbe9\") " pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.781246 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0babea53-5832-46a5-a0e6-9fd9823cbbe9-var-run\") pod \"ovn-controller-ovs-64shj\" (UID: \"0babea53-5832-46a5-a0e6-9fd9823cbbe9\") " pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.781372 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0babea53-5832-46a5-a0e6-9fd9823cbbe9-var-lib\") pod \"ovn-controller-ovs-64shj\" (UID: \"0babea53-5832-46a5-a0e6-9fd9823cbbe9\") " pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.781407 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-var-run\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" 
Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.781871 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-var-run-ovn\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.781947 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-var-run\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.782140 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0babea53-5832-46a5-a0e6-9fd9823cbbe9-etc-ovs\") pod \"ovn-controller-ovs-64shj\" (UID: \"0babea53-5832-46a5-a0e6-9fd9823cbbe9\") " pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.782488 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0babea53-5832-46a5-a0e6-9fd9823cbbe9-var-log\") pod \"ovn-controller-ovs-64shj\" (UID: \"0babea53-5832-46a5-a0e6-9fd9823cbbe9\") " pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.782592 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0babea53-5832-46a5-a0e6-9fd9823cbbe9-var-lib\") pod \"ovn-controller-ovs-64shj\" (UID: \"0babea53-5832-46a5-a0e6-9fd9823cbbe9\") " pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.782629 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0babea53-5832-46a5-a0e6-9fd9823cbbe9-var-run\") pod \"ovn-controller-ovs-64shj\" (UID: \"0babea53-5832-46a5-a0e6-9fd9823cbbe9\") " pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.796454 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0babea53-5832-46a5-a0e6-9fd9823cbbe9-scripts\") pod \"ovn-controller-ovs-64shj\" (UID: \"0babea53-5832-46a5-a0e6-9fd9823cbbe9\") " pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.799072 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-scripts\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.799867 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-var-log-ovn\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.801016 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-combined-ca-bundle\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " 
pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.802365 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x44xd\" (UniqueName: \"kubernetes.io/projected/0babea53-5832-46a5-a0e6-9fd9823cbbe9-kube-api-access-x44xd\") pod \"ovn-controller-ovs-64shj\" (UID: \"0babea53-5832-46a5-a0e6-9fd9823cbbe9\") " pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.803146 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjwrk\" (UniqueName: \"kubernetes.io/projected/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-kube-api-access-qjwrk\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.804627 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/acf0ca9c-abda-4c3b-98d3-ca3e6189434a-ovn-controller-tls-certs\") pod \"ovn-controller-gkqpl\" (UID: \"acf0ca9c-abda-4c3b-98d3-ca3e6189434a\") " pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.874113 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gkqpl" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.907493 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.963854 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.965685 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.969142 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.969183 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.969368 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.969551 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.970440 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-5cpfr" Jan 21 13:20:45 crc kubenswrapper[4765]: I0121 13:20:45.986298 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.086150 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.086268 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.086311 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.086338 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-config\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.086368 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ww6z\" (UniqueName: \"kubernetes.io/projected/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-kube-api-access-6ww6z\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.086427 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.086443 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.086471 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.188429 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.188504 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.188542 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.188571 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-config\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.188602 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ww6z\" (UniqueName: \"kubernetes.io/projected/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-kube-api-access-6ww6z\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.188663 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.188689 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.188725 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: 
I0121 13:20:46.189105 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.189961 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.191115 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.191154 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-config\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.210788 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.226069 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.290158 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.295195 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ww6z\" (UniqueName: \"kubernetes.io/projected/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-kube-api-access-6ww6z\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.301244 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3\") " pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:46 crc kubenswrapper[4765]: I0121 13:20:46.336607 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.564832 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.567545 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.596598 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.622574 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.625696 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.626082 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-f88bj" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.626230 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.675153 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1cf8f51-de39-4833-807f-f5ace97d9c30-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.675223 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1cf8f51-de39-4833-807f-f5ace97d9c30-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.675247 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5brj6\" (UniqueName: \"kubernetes.io/projected/f1cf8f51-de39-4833-807f-f5ace97d9c30-kube-api-access-5brj6\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.675288 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1cf8f51-de39-4833-807f-f5ace97d9c30-config\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.675335 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.675356 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1cf8f51-de39-4833-807f-f5ace97d9c30-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc 
kubenswrapper[4765]: I0121 13:20:49.675405 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1cf8f51-de39-4833-807f-f5ace97d9c30-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.675470 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f1cf8f51-de39-4833-807f-f5ace97d9c30-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.777095 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1cf8f51-de39-4833-807f-f5ace97d9c30-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.777163 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f1cf8f51-de39-4833-807f-f5ace97d9c30-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.777227 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1cf8f51-de39-4833-807f-f5ace97d9c30-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.777251 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1cf8f51-de39-4833-807f-f5ace97d9c30-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.777277 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5brj6\" (UniqueName: \"kubernetes.io/projected/f1cf8f51-de39-4833-807f-f5ace97d9c30-kube-api-access-5brj6\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.777324 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1cf8f51-de39-4833-807f-f5ace97d9c30-config\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.777379 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.777407 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1cf8f51-de39-4833-807f-f5ace97d9c30-ovsdbserver-sb-tls-certs\") 
pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.779094 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1cf8f51-de39-4833-807f-f5ace97d9c30-config\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.779096 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.780073 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1cf8f51-de39-4833-807f-f5ace97d9c30-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.780988 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f1cf8f51-de39-4833-807f-f5ace97d9c30-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.787989 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1cf8f51-de39-4833-807f-f5ace97d9c30-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.797615 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1cf8f51-de39-4833-807f-f5ace97d9c30-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.816465 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1cf8f51-de39-4833-807f-f5ace97d9c30-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.818253 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5brj6\" (UniqueName: \"kubernetes.io/projected/f1cf8f51-de39-4833-807f-f5ace97d9c30-kube-api-access-5brj6\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.863101 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f1cf8f51-de39-4833-807f-f5ace97d9c30\") " pod="openstack/ovsdbserver-sb-0" Jan 21 13:20:49 crc kubenswrapper[4765]: I0121 13:20:49.964684 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 13:21:00 crc kubenswrapper[4765]: I0121 13:21:00.081261 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gkqpl"] Jan 21 13:21:04 crc kubenswrapper[4765]: E0121 13:21:04.211454 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 21 13:21:04 crc kubenswrapper[4765]: E0121 13:21:04.214567 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xwpkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(4d783178-0ea7-4643-802f-d56722e1df7d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:21:04 crc kubenswrapper[4765]: E0121 13:21:04.215922 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="4d783178-0ea7-4643-802f-d56722e1df7d" Jan 21 13:21:04 crc kubenswrapper[4765]: E0121 13:21:04.219811 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 21 13:21:04 crc kubenswrapper[4765]: E0121 13:21:04.220063 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9p49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(054275fd-f5b9-4326-98a3-af2cc1d76c17): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:21:04 crc kubenswrapper[4765]: E0121 13:21:04.221266 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="054275fd-f5b9-4326-98a3-af2cc1d76c17" Jan 21 13:21:05 crc kubenswrapper[4765]: E0121 13:21:05.039503 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 21 13:21:05 crc kubenswrapper[4765]: E0121 13:21:05.039816 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n66h58dh7fh64dh7bh565h68hf5h559h5cch698h5cch549h59fhd8hf8h54fh68bhfch68dh5dch9h6chf7hcdh5d6h76h7bh5dhfchcch54q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2lbvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(02d30b98-43d0-4b3f-82c0-64193524da98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:21:05 crc kubenswrapper[4765]: E0121 13:21:05.041074 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="02d30b98-43d0-4b3f-82c0-64193524da98" Jan 21 13:21:05 crc kubenswrapper[4765]: E0121 13:21:05.102272 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="4d783178-0ea7-4643-802f-d56722e1df7d" Jan 21 13:21:05 crc kubenswrapper[4765]: E0121 13:21:05.102917 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="02d30b98-43d0-4b3f-82c0-64193524da98" Jan 21 13:21:05 crc kubenswrapper[4765]: E0121 13:21:05.105660 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="054275fd-f5b9-4326-98a3-af2cc1d76c17" Jan 21 13:21:08 crc kubenswrapper[4765]: I0121 13:21:08.126712 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gkqpl" event={"ID":"acf0ca9c-abda-4c3b-98d3-ca3e6189434a","Type":"ContainerStarted","Data":"60d12977312fa094e4327ce273fa81c849a9db6f6fd12d75e356960faf393b82"} Jan 21 13:21:13 crc kubenswrapper[4765]: I0121 13:21:13.058646 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-64shj"] Jan 21 13:21:13 crc kubenswrapper[4765]: E0121 13:21:13.511924 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 13:21:13 crc kubenswrapper[4765]: E0121 13:21:13.512416 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces 
--listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5zvtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-6rd5t_openstack(9b9057ca-badb-4b3d-95e9-ec32dcd52f2e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:21:13 crc kubenswrapper[4765]: E0121 13:21:13.513779 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-6rd5t" podUID="9b9057ca-badb-4b3d-95e9-ec32dcd52f2e" Jan 21 13:21:13 crc kubenswrapper[4765]: E0121 13:21:13.587280 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 13:21:13 crc kubenswrapper[4765]: E0121 13:21:13.587460 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpmdc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-wj7pv_openstack(29c45b3c-a3c8-4480-895c-a86ec81ede26): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:21:13 crc kubenswrapper[4765]: E0121 13:21:13.588613 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-wj7pv" podUID="29c45b3c-a3c8-4480-895c-a86ec81ede26" Jan 21 13:21:13 crc kubenswrapper[4765]: E0121 13:21:13.758123 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 13:21:13 crc kubenswrapper[4765]: E0121 13:21:13.758604 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cv94r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-5wtns_openstack(d732c2fd-5d00-4ab9-8482-dc376f1924cb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:21:13 crc kubenswrapper[4765]: E0121 13:21:13.759740 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-5wtns" podUID="d732c2fd-5d00-4ab9-8482-dc376f1924cb" Jan 21 13:21:13 crc kubenswrapper[4765]: E0121 13:21:13.776443 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 13:21:13 crc kubenswrapper[4765]: E0121 13:21:13.776631 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5pmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-sxfhw_openstack(64c10d89-0a3a-4106-b34e-ff8252758f2c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:21:13 crc kubenswrapper[4765]: E0121 13:21:13.779032 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" podUID="64c10d89-0a3a-4106-b34e-ff8252758f2c" Jan 21 13:21:14 crc kubenswrapper[4765]: I0121 13:21:14.136011 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 13:21:14 crc kubenswrapper[4765]: I0121 13:21:14.180277 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-64shj" event={"ID":"0babea53-5832-46a5-a0e6-9fd9823cbbe9","Type":"ContainerStarted","Data":"67e705a58d2de3c3ab72dc4537974ca06cba28d4aacdf5b57c125ff8d4718d07"} Jan 21 13:21:14 crc kubenswrapper[4765]: E0121 13:21:14.182195 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" podUID="64c10d89-0a3a-4106-b34e-ff8252758f2c" Jan 21 13:21:14 crc kubenswrapper[4765]: E0121 13:21:14.182668 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-5wtns" podUID="d732c2fd-5d00-4ab9-8482-dc376f1924cb" Jan 21 13:21:14 crc kubenswrapper[4765]: E0121 13:21:14.853745 
4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 21 13:21:14 crc kubenswrapper[4765]: E0121 13:21:14.854059 4765 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 21 13:21:14 crc kubenswrapper[4765]: E0121 13:21:14.854186 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8fz2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(12b93916-e6dc-4aac-809e-0dfe1b11ed1a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 13:21:14 crc kubenswrapper[4765]: E0121 13:21:14.855924 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="12b93916-e6dc-4aac-809e-0dfe1b11ed1a" Jan 21 13:21:14 crc kubenswrapper[4765]: I0121 13:21:14.971350 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wj7pv" Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.001096 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6rd5t" Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.167269 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29c45b3c-a3c8-4480-895c-a86ec81ede26-config\") pod \"29c45b3c-a3c8-4480-895c-a86ec81ede26\" (UID: \"29c45b3c-a3c8-4480-895c-a86ec81ede26\") " Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.167320 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zvtr\" (UniqueName: \"kubernetes.io/projected/9b9057ca-badb-4b3d-95e9-ec32dcd52f2e-kube-api-access-5zvtr\") pod \"9b9057ca-badb-4b3d-95e9-ec32dcd52f2e\" (UID: \"9b9057ca-badb-4b3d-95e9-ec32dcd52f2e\") " Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.167351 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpmdc\" (UniqueName: \"kubernetes.io/projected/29c45b3c-a3c8-4480-895c-a86ec81ede26-kube-api-access-mpmdc\") pod \"29c45b3c-a3c8-4480-895c-a86ec81ede26\" (UID: \"29c45b3c-a3c8-4480-895c-a86ec81ede26\") " Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.167370 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9057ca-badb-4b3d-95e9-ec32dcd52f2e-config\") pod \"9b9057ca-badb-4b3d-95e9-ec32dcd52f2e\" (UID: \"9b9057ca-badb-4b3d-95e9-ec32dcd52f2e\") " Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.167392 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b9057ca-badb-4b3d-95e9-ec32dcd52f2e-dns-svc\") pod \"9b9057ca-badb-4b3d-95e9-ec32dcd52f2e\" (UID: \"9b9057ca-badb-4b3d-95e9-ec32dcd52f2e\") " Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.168085 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9057ca-badb-4b3d-95e9-ec32dcd52f2e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b9057ca-badb-4b3d-95e9-ec32dcd52f2e" (UID: "9b9057ca-badb-4b3d-95e9-ec32dcd52f2e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.168615 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29c45b3c-a3c8-4480-895c-a86ec81ede26-config" (OuterVolumeSpecName: "config") pod "29c45b3c-a3c8-4480-895c-a86ec81ede26" (UID: "29c45b3c-a3c8-4480-895c-a86ec81ede26"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.169425 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 13:21:15 crc kubenswrapper[4765]: W0121 13:21:15.170121 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d40d1a6_1e7f_4643_82cf_ec7dfcfbf6d3.slice/crio-db3734b519ef66fdd01b7775c45d0dc079442be49f6b150f4dd410ff15702c68 WatchSource:0}: Error finding container db3734b519ef66fdd01b7775c45d0dc079442be49f6b150f4dd410ff15702c68: Status 404 returned error can't find the container with id db3734b519ef66fdd01b7775c45d0dc079442be49f6b150f4dd410ff15702c68 Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.176559 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9057ca-badb-4b3d-95e9-ec32dcd52f2e-config" (OuterVolumeSpecName: "config") pod "9b9057ca-badb-4b3d-95e9-ec32dcd52f2e" (UID: "9b9057ca-badb-4b3d-95e9-ec32dcd52f2e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.178554 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9057ca-badb-4b3d-95e9-ec32dcd52f2e-kube-api-access-5zvtr" (OuterVolumeSpecName: "kube-api-access-5zvtr") pod "9b9057ca-badb-4b3d-95e9-ec32dcd52f2e" (UID: "9b9057ca-badb-4b3d-95e9-ec32dcd52f2e"). InnerVolumeSpecName "kube-api-access-5zvtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.178715 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29c45b3c-a3c8-4480-895c-a86ec81ede26-kube-api-access-mpmdc" (OuterVolumeSpecName: "kube-api-access-mpmdc") pod "29c45b3c-a3c8-4480-895c-a86ec81ede26" (UID: "29c45b3c-a3c8-4480-895c-a86ec81ede26"). InnerVolumeSpecName "kube-api-access-mpmdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.206958 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6rd5t" event={"ID":"9b9057ca-badb-4b3d-95e9-ec32dcd52f2e","Type":"ContainerDied","Data":"5de449948007eac2577df71c51baa6e28c6a8606cb12fe0a64ff1565f6760049"} Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.207044 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6rd5t" Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.314731 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29c45b3c-a3c8-4480-895c-a86ec81ede26-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.314781 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zvtr\" (UniqueName: \"kubernetes.io/projected/9b9057ca-badb-4b3d-95e9-ec32dcd52f2e-kube-api-access-5zvtr\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.314810 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpmdc\" (UniqueName: \"kubernetes.io/projected/29c45b3c-a3c8-4480-895c-a86ec81ede26-kube-api-access-mpmdc\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.314824 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9057ca-badb-4b3d-95e9-ec32dcd52f2e-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.314838 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b9057ca-badb-4b3d-95e9-ec32dcd52f2e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.317492 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-wj7pv" event={"ID":"29c45b3c-a3c8-4480-895c-a86ec81ede26","Type":"ContainerDied","Data":"25d00f1a25629548f8b4e0fc0e9fc8ffd875b229a98b863ba04c7997e0e058cc"} Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.317592 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wj7pv" Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.339292 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3","Type":"ContainerStarted","Data":"db3734b519ef66fdd01b7775c45d0dc079442be49f6b150f4dd410ff15702c68"} Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.345802 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f1cf8f51-de39-4833-807f-f5ace97d9c30","Type":"ContainerStarted","Data":"4ee951043f8e8dbcdfe547d11ce5b1f6063130c6ab28fb93986260072672150c"} Jan 21 13:21:15 crc kubenswrapper[4765]: E0121 13:21:15.347342 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="12b93916-e6dc-4aac-809e-0dfe1b11ed1a" Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.526253 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wj7pv"] Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.536681 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wj7pv"] Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.578759 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6rd5t"] Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.590646 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6rd5t"] Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.628433 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29c45b3c-a3c8-4480-895c-a86ec81ede26" path="/var/lib/kubelet/pods/29c45b3c-a3c8-4480-895c-a86ec81ede26/volumes" Jan 21 13:21:15 crc kubenswrapper[4765]: I0121 13:21:15.629202 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b9057ca-badb-4b3d-95e9-ec32dcd52f2e" path="/var/lib/kubelet/pods/9b9057ca-badb-4b3d-95e9-ec32dcd52f2e/volumes" Jan 21 13:21:16 crc kubenswrapper[4765]: I0121 13:21:16.352262 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"00d8ba34-9c69-4d77-a58a-e8202aa68b31","Type":"ContainerStarted","Data":"b61accfc26be19c6198603b46df30dab7d6d786704671c490846bf146345f24b"} Jan 21 13:21:16 crc kubenswrapper[4765]: I0121 13:21:16.354270 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cf0cab45-7e21-4b1e-a868-b19db9379c99","Type":"ContainerStarted","Data":"6664a667c664a4353c09fb1ecea43d33748eba84551d23d45091ece8e6a613a2"} Jan 21 13:21:25 crc kubenswrapper[4765]: I0121 13:21:25.458881 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-64shj" event={"ID":"0babea53-5832-46a5-a0e6-9fd9823cbbe9","Type":"ContainerStarted","Data":"a3096cc7a21083a755740ef9bedbdb03562ef5c4a96b5255df4352630ef7904c"} Jan 21 13:21:25 crc kubenswrapper[4765]: I0121 13:21:25.461451 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"02d30b98-43d0-4b3f-82c0-64193524da98","Type":"ContainerStarted","Data":"3a5d2688e7b4467c62820ee91c5234a2b91b689d61180be100cf6e174d83eeab"} Jan 21 13:21:25 crc kubenswrapper[4765]: I0121 13:21:25.462061 4765 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/memcached-0" Jan 21 13:21:25 crc kubenswrapper[4765]: I0121 13:21:25.463782 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gkqpl" event={"ID":"acf0ca9c-abda-4c3b-98d3-ca3e6189434a","Type":"ContainerStarted","Data":"370f704e0b3c4b732776660dae90df584133ec1abe5ea3b0c702a197c51e2e68"} Jan 21 13:21:25 crc kubenswrapper[4765]: I0121 13:21:25.463983 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-gkqpl" Jan 21 13:21:25 crc kubenswrapper[4765]: I0121 13:21:25.465571 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3","Type":"ContainerStarted","Data":"b4afef6854a243d0c824dad4c528b679eae7655a3b95b790e3f7a930ab5ac81a"} Jan 21 13:21:25 crc kubenswrapper[4765]: I0121 13:21:25.467076 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f1cf8f51-de39-4833-807f-f5ace97d9c30","Type":"ContainerStarted","Data":"54b5fcafba33f67749f64c6afaeeb5087d95c6757a38ac23ea8ed61d3c48601a"} Jan 21 13:21:25 crc kubenswrapper[4765]: I0121 13:21:25.518043 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gkqpl" podStartSLOduration=22.767625318 podStartE2EDuration="40.518023019s" podCreationTimestamp="2026-01-21 13:20:45 +0000 UTC" firstStartedPulling="2026-01-21 13:21:07.13679515 +0000 UTC m=+1128.154520972" lastFinishedPulling="2026-01-21 13:21:24.887192851 +0000 UTC m=+1145.904918673" observedRunningTime="2026-01-21 13:21:25.508105198 +0000 UTC m=+1146.525831020" watchObservedRunningTime="2026-01-21 13:21:25.518023019 +0000 UTC m=+1146.535748841" Jan 21 13:21:25 crc kubenswrapper[4765]: I0121 13:21:25.657358 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.7889024449999997 podStartE2EDuration="45.657336381s" podCreationTimestamp="2026-01-21 13:20:40 +0000 UTC" firstStartedPulling="2026-01-21 13:20:42.028249984 +0000 UTC m=+1103.045975806" lastFinishedPulling="2026-01-21 13:21:24.89668392 +0000 UTC m=+1145.914409742" observedRunningTime="2026-01-21 13:21:25.536929855 +0000 UTC m=+1146.554655677" watchObservedRunningTime="2026-01-21 13:21:25.657336381 +0000 UTC m=+1146.675062203" Jan 21 13:21:26 crc kubenswrapper[4765]: I0121 13:21:26.479756 4765 generic.go:334] "Generic (PLEG): container finished" podID="0babea53-5832-46a5-a0e6-9fd9823cbbe9" containerID="a3096cc7a21083a755740ef9bedbdb03562ef5c4a96b5255df4352630ef7904c" exitCode=0 Jan 21 13:21:26 crc kubenswrapper[4765]: I0121 13:21:26.479952 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-64shj" event={"ID":"0babea53-5832-46a5-a0e6-9fd9823cbbe9","Type":"ContainerDied","Data":"a3096cc7a21083a755740ef9bedbdb03562ef5c4a96b5255df4352630ef7904c"} Jan 21 13:21:27 crc kubenswrapper[4765]: E0121 13:21:27.059496 4765 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf0cab45_7e21_4b1e_a868_b19db9379c99.slice/crio-6664a667c664a4353c09fb1ecea43d33748eba84551d23d45091ece8e6a613a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf0cab45_7e21_4b1e_a868_b19db9379c99.slice/crio-conmon-6664a667c664a4353c09fb1ecea43d33748eba84551d23d45091ece8e6a613a2.scope\": 
RecentStats: unable to find data in memory cache]" Jan 21 13:21:27 crc kubenswrapper[4765]: I0121 13:21:27.496811 4765 generic.go:334] "Generic (PLEG): container finished" podID="cf0cab45-7e21-4b1e-a868-b19db9379c99" containerID="6664a667c664a4353c09fb1ecea43d33748eba84551d23d45091ece8e6a613a2" exitCode=0 Jan 21 13:21:27 crc kubenswrapper[4765]: I0121 13:21:27.496896 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cf0cab45-7e21-4b1e-a868-b19db9379c99","Type":"ContainerDied","Data":"6664a667c664a4353c09fb1ecea43d33748eba84551d23d45091ece8e6a613a2"} Jan 21 13:21:27 crc kubenswrapper[4765]: I0121 13:21:27.504637 4765 generic.go:334] "Generic (PLEG): container finished" podID="64c10d89-0a3a-4106-b34e-ff8252758f2c" containerID="cd1307237e1ca30def3b3c6d6584a9203ed9979625d371b4a8401c42ab2fce76" exitCode=0 Jan 21 13:21:27 crc kubenswrapper[4765]: I0121 13:21:27.504771 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" event={"ID":"64c10d89-0a3a-4106-b34e-ff8252758f2c","Type":"ContainerDied","Data":"cd1307237e1ca30def3b3c6d6584a9203ed9979625d371b4a8401c42ab2fce76"} Jan 21 13:21:27 crc kubenswrapper[4765]: I0121 13:21:27.551000 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-64shj" event={"ID":"0babea53-5832-46a5-a0e6-9fd9823cbbe9","Type":"ContainerStarted","Data":"35902d330cb7001848ce1a446e4f0c0cc60b41338578d9b0368c750d73043867"} Jan 21 13:21:27 crc kubenswrapper[4765]: I0121 13:21:27.551066 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-64shj" event={"ID":"0babea53-5832-46a5-a0e6-9fd9823cbbe9","Type":"ContainerStarted","Data":"9e96b5cf099bfa2a3676fc5952e5906cb617b5f7475f71c9392b6e2c3bf3bf0a"} Jan 21 13:21:27 crc kubenswrapper[4765]: I0121 13:21:27.551098 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:21:27 crc kubenswrapper[4765]: I0121 13:21:27.551116 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:21:27 crc kubenswrapper[4765]: I0121 13:21:27.567161 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4d783178-0ea7-4643-802f-d56722e1df7d","Type":"ContainerStarted","Data":"4616ef97539fc8112f0373c108ede44e8bc6f6f97bc36b1ff01a83991a083f75"} Jan 21 13:21:27 crc kubenswrapper[4765]: I0121 13:21:27.582390 4765 generic.go:334] "Generic (PLEG): container finished" podID="00d8ba34-9c69-4d77-a58a-e8202aa68b31" containerID="b61accfc26be19c6198603b46df30dab7d6d786704671c490846bf146345f24b" exitCode=0 Jan 21 13:21:27 crc kubenswrapper[4765]: I0121 13:21:27.582910 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"00d8ba34-9c69-4d77-a58a-e8202aa68b31","Type":"ContainerDied","Data":"b61accfc26be19c6198603b46df30dab7d6d786704671c490846bf146345f24b"} Jan 21 13:21:27 crc kubenswrapper[4765]: I0121 13:21:27.636815 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"054275fd-f5b9-4326-98a3-af2cc1d76c17","Type":"ContainerStarted","Data":"7dcc51364c36973f1ebc49e3c990ab016165b1bb8ac45a8169fac12e8e7360f4"} Jan 21 13:21:27 crc kubenswrapper[4765]: I0121 13:21:27.719628 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-64shj" podStartSLOduration=31.317953497 
podStartE2EDuration="42.719600871s" podCreationTimestamp="2026-01-21 13:20:45 +0000 UTC" firstStartedPulling="2026-01-21 13:21:13.495830619 +0000 UTC m=+1134.513556441" lastFinishedPulling="2026-01-21 13:21:24.897478003 +0000 UTC m=+1145.915203815" observedRunningTime="2026-01-21 13:21:27.713822181 +0000 UTC m=+1148.731548003" watchObservedRunningTime="2026-01-21 13:21:27.719600871 +0000 UTC m=+1148.737326693" Jan 21 13:21:28 crc kubenswrapper[4765]: I0121 13:21:28.638904 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cf0cab45-7e21-4b1e-a868-b19db9379c99","Type":"ContainerStarted","Data":"2881ba440edade62cca6aedd2c30e11b982e671adc13d4fc1ff04084254a19d5"} Jan 21 13:21:28 crc kubenswrapper[4765]: I0121 13:21:28.643285 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" event={"ID":"64c10d89-0a3a-4106-b34e-ff8252758f2c","Type":"ContainerStarted","Data":"9d8f4041c937926a99cbe07a54477a15a1d91219fb87e79d6b8417069219140f"} Jan 21 13:21:28 crc kubenswrapper[4765]: I0121 13:21:28.644364 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" Jan 21 13:21:28 crc kubenswrapper[4765]: I0121 13:21:28.649366 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"00d8ba34-9c69-4d77-a58a-e8202aa68b31","Type":"ContainerStarted","Data":"8c1c7cada196622fd614d5aa103c688173902ed540bcf15b1c71da1087413514"} Jan 21 13:21:28 crc kubenswrapper[4765]: I0121 13:21:28.679014 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=18.438689138 podStartE2EDuration="50.678990969s" podCreationTimestamp="2026-01-21 13:20:38 +0000 UTC" firstStartedPulling="2026-01-21 13:20:41.535053708 +0000 UTC m=+1102.552779530" lastFinishedPulling="2026-01-21 13:21:13.775355539 +0000 UTC m=+1134.793081361" observedRunningTime="2026-01-21 13:21:28.664989848 +0000 UTC m=+1149.682715680" watchObservedRunningTime="2026-01-21 13:21:28.678990969 +0000 UTC m=+1149.696716811" Jan 21 13:21:28 crc kubenswrapper[4765]: I0121 13:21:28.696376 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" podStartSLOduration=3.812173147 podStartE2EDuration="52.696347569s" podCreationTimestamp="2026-01-21 13:20:36 +0000 UTC" firstStartedPulling="2026-01-21 13:20:37.366228538 +0000 UTC m=+1098.383954360" lastFinishedPulling="2026-01-21 13:21:26.25040296 +0000 UTC m=+1147.268128782" observedRunningTime="2026-01-21 13:21:28.689803817 +0000 UTC m=+1149.707529639" watchObservedRunningTime="2026-01-21 13:21:28.696347569 +0000 UTC m=+1149.714073391" Jan 21 13:21:28 crc kubenswrapper[4765]: I0121 13:21:28.723440 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=17.748672232 podStartE2EDuration="51.723414944s" podCreationTimestamp="2026-01-21 13:20:37 +0000 UTC" firstStartedPulling="2026-01-21 13:20:39.800110272 +0000 UTC m=+1100.817836084" lastFinishedPulling="2026-01-21 13:21:13.774852964 +0000 UTC m=+1134.792578796" observedRunningTime="2026-01-21 13:21:28.712366869 +0000 UTC m=+1149.730092681" watchObservedRunningTime="2026-01-21 13:21:28.723414944 +0000 UTC m=+1149.741140766" Jan 21 13:21:28 crc kubenswrapper[4765]: I0121 13:21:28.837369 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/openstack-galera-0" Jan 21 13:21:28 crc kubenswrapper[4765]: I0121 13:21:28.837430 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 21 13:21:29 crc kubenswrapper[4765]: E0121 13:21:29.284352 4765 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.144:60848->38.129.56.144:41999: write tcp 192.168.126.11:10250->192.168.126.11:53076: write: broken pipe Jan 21 13:21:30 crc kubenswrapper[4765]: I0121 13:21:30.246369 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 21 13:21:30 crc kubenswrapper[4765]: I0121 13:21:30.246762 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 21 13:21:30 crc kubenswrapper[4765]: I0121 13:21:30.665023 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f1cf8f51-de39-4833-807f-f5ace97d9c30","Type":"ContainerStarted","Data":"a22e41ef79ad2270fb8fe371c9c3085bcfe9546991949711d68051c3e050ccd3"} Jan 21 13:21:30 crc kubenswrapper[4765]: I0121 13:21:30.670045 4765 generic.go:334] "Generic (PLEG): container finished" podID="d732c2fd-5d00-4ab9-8482-dc376f1924cb" containerID="36fea743941dfb32b06b76007e5e0dd40fa40ab9185ca9c7c4b9909bdb666568" exitCode=0 Jan 21 13:21:30 crc kubenswrapper[4765]: I0121 13:21:30.670138 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5wtns" event={"ID":"d732c2fd-5d00-4ab9-8482-dc376f1924cb","Type":"ContainerDied","Data":"36fea743941dfb32b06b76007e5e0dd40fa40ab9185ca9c7c4b9909bdb666568"} Jan 21 13:21:30 crc kubenswrapper[4765]: I0121 13:21:30.671985 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"12b93916-e6dc-4aac-809e-0dfe1b11ed1a","Type":"ContainerStarted","Data":"a3f1da0a157762cc91be03569cf42b0a44b62a1c1885eec7a53e04727abf2412"} Jan 21 13:21:30 crc kubenswrapper[4765]: I0121 13:21:30.673280 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 13:21:30 crc kubenswrapper[4765]: I0121 13:21:30.680895 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3","Type":"ContainerStarted","Data":"8f004218f8608eac10fcf1fde5e350108256149626bfce7233520392696ec7b0"} Jan 21 13:21:30 crc kubenswrapper[4765]: I0121 13:21:30.743796 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=27.685652647 podStartE2EDuration="42.743775794s" podCreationTimestamp="2026-01-21 13:20:48 +0000 UTC" firstStartedPulling="2026-01-21 13:21:14.846014776 +0000 UTC m=+1135.863740588" lastFinishedPulling="2026-01-21 13:21:29.904137913 +0000 UTC m=+1150.921863735" observedRunningTime="2026-01-21 13:21:30.712887906 +0000 UTC m=+1151.730613728" watchObservedRunningTime="2026-01-21 13:21:30.743775794 +0000 UTC m=+1151.761501616" Jan 21 13:21:30 crc kubenswrapper[4765]: I0121 13:21:30.766593 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.530875377 podStartE2EDuration="49.766570983s" podCreationTimestamp="2026-01-21 13:20:41 +0000 UTC" firstStartedPulling="2026-01-21 13:20:42.844998172 +0000 UTC m=+1103.862723994" lastFinishedPulling="2026-01-21 13:21:30.080693788 +0000 UTC m=+1151.098419600" 
observedRunningTime="2026-01-21 13:21:30.74162352 +0000 UTC m=+1151.759349342" watchObservedRunningTime="2026-01-21 13:21:30.766570983 +0000 UTC m=+1151.784296805" Jan 21 13:21:30 crc kubenswrapper[4765]: I0121 13:21:30.789386 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 21 13:21:30 crc kubenswrapper[4765]: I0121 13:21:30.816186 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=32.111161014 podStartE2EDuration="46.81616457s" podCreationTimestamp="2026-01-21 13:20:44 +0000 UTC" firstStartedPulling="2026-01-21 13:21:15.181358535 +0000 UTC m=+1136.199084357" lastFinishedPulling="2026-01-21 13:21:29.886362091 +0000 UTC m=+1150.904087913" observedRunningTime="2026-01-21 13:21:30.784402787 +0000 UTC m=+1151.802128609" watchObservedRunningTime="2026-01-21 13:21:30.81616457 +0000 UTC m=+1151.833890412" Jan 21 13:21:31 crc kubenswrapper[4765]: I0121 13:21:31.336892 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 21 13:21:31 crc kubenswrapper[4765]: I0121 13:21:31.336932 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 21 13:21:31 crc kubenswrapper[4765]: I0121 13:21:31.378878 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 21 13:21:31 crc kubenswrapper[4765]: I0121 13:21:31.691237 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5wtns" event={"ID":"d732c2fd-5d00-4ab9-8482-dc376f1924cb","Type":"ContainerStarted","Data":"28ce6355dc82cff39d2b6c49e13f5caa4dbecebe7a60e0d3132cd2a20d4f9593"} Jan 21 13:21:31 crc kubenswrapper[4765]: I0121 13:21:31.692976 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-5wtns" Jan 21 13:21:31 crc kubenswrapper[4765]: I0121 13:21:31.716623 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-5wtns" podStartSLOduration=-9223371980.138176 podStartE2EDuration="56.716599705s" podCreationTimestamp="2026-01-21 13:20:35 +0000 UTC" firstStartedPulling="2026-01-21 13:20:36.957191474 +0000 UTC m=+1097.974917296" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:21:31.712563817 +0000 UTC m=+1152.730289649" watchObservedRunningTime="2026-01-21 13:21:31.716599705 +0000 UTC m=+1152.734325527" Jan 21 13:21:31 crc kubenswrapper[4765]: I0121 13:21:31.733709 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 21 13:21:31 crc kubenswrapper[4765]: I0121 13:21:31.965913 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.061572 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.072488 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5wtns"] Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.140925 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-slv44"] Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.143244 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.146176 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.170173 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-slv44"] Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.189565 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-zmx6x"] Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.191042 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.193519 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.209422 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-zmx6x"] Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.288280 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2c7cc04a-963e-42e5-82ca-674e3e576a27-ovn-rundir\") pod \"ovn-controller-metrics-zmx6x\" (UID: \"2c7cc04a-963e-42e5-82ca-674e3e576a27\") " pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.288354 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d319df9-fb42-4085-8f96-4fd671ee4ac1-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-slv44\" (UID: \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.288453 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d319df9-fb42-4085-8f96-4fd671ee4ac1-config\") pod \"dnsmasq-dns-5bf47b49b7-slv44\" (UID: \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.288487 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c7cc04a-963e-42e5-82ca-674e3e576a27-config\") pod \"ovn-controller-metrics-zmx6x\" (UID: \"2c7cc04a-963e-42e5-82ca-674e3e576a27\") " pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.288526 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2c7cc04a-963e-42e5-82ca-674e3e576a27-ovs-rundir\") pod \"ovn-controller-metrics-zmx6x\" (UID: \"2c7cc04a-963e-42e5-82ca-674e3e576a27\") " pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.288548 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgckv\" (UniqueName: \"kubernetes.io/projected/2c7cc04a-963e-42e5-82ca-674e3e576a27-kube-api-access-hgckv\") pod \"ovn-controller-metrics-zmx6x\" (UID: \"2c7cc04a-963e-42e5-82ca-674e3e576a27\") " pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.288856 4765 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hjmj\" (UniqueName: \"kubernetes.io/projected/4d319df9-fb42-4085-8f96-4fd671ee4ac1-kube-api-access-9hjmj\") pod \"dnsmasq-dns-5bf47b49b7-slv44\" (UID: \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.289090 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7cc04a-963e-42e5-82ca-674e3e576a27-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zmx6x\" (UID: \"2c7cc04a-963e-42e5-82ca-674e3e576a27\") " pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.289158 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c7cc04a-963e-42e5-82ca-674e3e576a27-combined-ca-bundle\") pod \"ovn-controller-metrics-zmx6x\" (UID: \"2c7cc04a-963e-42e5-82ca-674e3e576a27\") " pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.289200 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d319df9-fb42-4085-8f96-4fd671ee4ac1-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-slv44\" (UID: \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.391521 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d319df9-fb42-4085-8f96-4fd671ee4ac1-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-slv44\" (UID: \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.391604 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d319df9-fb42-4085-8f96-4fd671ee4ac1-config\") pod \"dnsmasq-dns-5bf47b49b7-slv44\" (UID: \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.391634 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c7cc04a-963e-42e5-82ca-674e3e576a27-config\") pod \"ovn-controller-metrics-zmx6x\" (UID: \"2c7cc04a-963e-42e5-82ca-674e3e576a27\") " pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.391670 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2c7cc04a-963e-42e5-82ca-674e3e576a27-ovs-rundir\") pod \"ovn-controller-metrics-zmx6x\" (UID: \"2c7cc04a-963e-42e5-82ca-674e3e576a27\") " pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.391706 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgckv\" (UniqueName: \"kubernetes.io/projected/2c7cc04a-963e-42e5-82ca-674e3e576a27-kube-api-access-hgckv\") pod \"ovn-controller-metrics-zmx6x\" (UID: \"2c7cc04a-963e-42e5-82ca-674e3e576a27\") " pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 
13:21:32.391752 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hjmj\" (UniqueName: \"kubernetes.io/projected/4d319df9-fb42-4085-8f96-4fd671ee4ac1-kube-api-access-9hjmj\") pod \"dnsmasq-dns-5bf47b49b7-slv44\" (UID: \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.391798 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7cc04a-963e-42e5-82ca-674e3e576a27-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zmx6x\" (UID: \"2c7cc04a-963e-42e5-82ca-674e3e576a27\") " pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.391822 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c7cc04a-963e-42e5-82ca-674e3e576a27-combined-ca-bundle\") pod \"ovn-controller-metrics-zmx6x\" (UID: \"2c7cc04a-963e-42e5-82ca-674e3e576a27\") " pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.391848 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d319df9-fb42-4085-8f96-4fd671ee4ac1-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-slv44\" (UID: \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.391920 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2c7cc04a-963e-42e5-82ca-674e3e576a27-ovn-rundir\") pod \"ovn-controller-metrics-zmx6x\" (UID: \"2c7cc04a-963e-42e5-82ca-674e3e576a27\") " pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.392154 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2c7cc04a-963e-42e5-82ca-674e3e576a27-ovs-rundir\") pod \"ovn-controller-metrics-zmx6x\" (UID: \"2c7cc04a-963e-42e5-82ca-674e3e576a27\") " pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.392300 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2c7cc04a-963e-42e5-82ca-674e3e576a27-ovn-rundir\") pod \"ovn-controller-metrics-zmx6x\" (UID: \"2c7cc04a-963e-42e5-82ca-674e3e576a27\") " pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.393202 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d319df9-fb42-4085-8f96-4fd671ee4ac1-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-slv44\" (UID: \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.393406 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c7cc04a-963e-42e5-82ca-674e3e576a27-config\") pod \"ovn-controller-metrics-zmx6x\" (UID: \"2c7cc04a-963e-42e5-82ca-674e3e576a27\") " pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.394055 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/4d319df9-fb42-4085-8f96-4fd671ee4ac1-config\") pod \"dnsmasq-dns-5bf47b49b7-slv44\" (UID: \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.394342 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d319df9-fb42-4085-8f96-4fd671ee4ac1-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-slv44\" (UID: \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.396846 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c7cc04a-963e-42e5-82ca-674e3e576a27-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zmx6x\" (UID: \"2c7cc04a-963e-42e5-82ca-674e3e576a27\") " pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.427339 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c7cc04a-963e-42e5-82ca-674e3e576a27-combined-ca-bundle\") pod \"ovn-controller-metrics-zmx6x\" (UID: \"2c7cc04a-963e-42e5-82ca-674e3e576a27\") " pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.442014 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hjmj\" (UniqueName: \"kubernetes.io/projected/4d319df9-fb42-4085-8f96-4fd671ee4ac1-kube-api-access-9hjmj\") pod \"dnsmasq-dns-5bf47b49b7-slv44\" (UID: \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\") " pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.464799 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.465949 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgckv\" (UniqueName: \"kubernetes.io/projected/2c7cc04a-963e-42e5-82ca-674e3e576a27-kube-api-access-hgckv\") pod \"ovn-controller-metrics-zmx6x\" (UID: \"2c7cc04a-963e-42e5-82ca-674e3e576a27\") " pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.482932 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-sxfhw"] Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.483220 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" podUID="64c10d89-0a3a-4106-b34e-ff8252758f2c" containerName="dnsmasq-dns" containerID="cri-o://9d8f4041c937926a99cbe07a54477a15a1d91219fb87e79d6b8417069219140f" gracePeriod=10 Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.492389 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.563999 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-zmx6x" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.594768 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-jbggl"] Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.596556 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-jbggl" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.678226 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-jbggl"] Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.696297 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-jbggl\" (UID: \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\") " pod="openstack/dnsmasq-dns-57d65f699f-jbggl" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.696412 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-config\") pod \"dnsmasq-dns-57d65f699f-jbggl\" (UID: \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\") " pod="openstack/dnsmasq-dns-57d65f699f-jbggl" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.696454 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-dns-svc\") pod \"dnsmasq-dns-57d65f699f-jbggl\" (UID: \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\") " pod="openstack/dnsmasq-dns-57d65f699f-jbggl" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.696525 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbb9q\" (UniqueName: \"kubernetes.io/projected/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-kube-api-access-hbb9q\") pod \"dnsmasq-dns-57d65f699f-jbggl\" (UID: \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\") " pod="openstack/dnsmasq-dns-57d65f699f-jbggl" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.714698 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.796440 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.803294 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-config\") pod \"dnsmasq-dns-57d65f699f-jbggl\" (UID: \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\") " pod="openstack/dnsmasq-dns-57d65f699f-jbggl" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.803429 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-dns-svc\") pod \"dnsmasq-dns-57d65f699f-jbggl\" (UID: \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\") " pod="openstack/dnsmasq-dns-57d65f699f-jbggl" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.803592 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbb9q\" (UniqueName: \"kubernetes.io/projected/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-kube-api-access-hbb9q\") pod \"dnsmasq-dns-57d65f699f-jbggl\" (UID: \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\") " pod="openstack/dnsmasq-dns-57d65f699f-jbggl" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.803672 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-jbggl\" (UID: \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\") " pod="openstack/dnsmasq-dns-57d65f699f-jbggl" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.804604 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-jbggl\" (UID: \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\") " pod="openstack/dnsmasq-dns-57d65f699f-jbggl" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.805356 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-dns-svc\") pod \"dnsmasq-dns-57d65f699f-jbggl\" (UID: \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\") " pod="openstack/dnsmasq-dns-57d65f699f-jbggl" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.818298 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-config\") pod \"dnsmasq-dns-57d65f699f-jbggl\" (UID: \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\") " pod="openstack/dnsmasq-dns-57d65f699f-jbggl" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.858037 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-jbggl"] Jan 21 13:21:32 crc kubenswrapper[4765]: E0121 13:21:32.858917 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-hbb9q], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-57d65f699f-jbggl" podUID="73a1bfb6-fd7c-4dd0-8a8c-582977ed2571" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.902055 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbb9q\" (UniqueName: \"kubernetes.io/projected/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-kube-api-access-hbb9q\") pod \"dnsmasq-dns-57d65f699f-jbggl\" (UID: \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\") " pod="openstack/dnsmasq-dns-57d65f699f-jbggl" Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.987629 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-2bt8n"] Jan 21 13:21:32 crc kubenswrapper[4765]: I0121 13:21:32.994758 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.061729 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-2bt8n"] Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.079928 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.222275 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-2bt8n\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.222355 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-2bt8n\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.222425 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-config\") pod \"dnsmasq-dns-b8fbc5445-2bt8n\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.222478 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-2bt8n\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.222501 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mqzc\" (UniqueName: \"kubernetes.io/projected/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-kube-api-access-6mqzc\") pod \"dnsmasq-dns-b8fbc5445-2bt8n\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.323765 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-2bt8n\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.324140 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-2bt8n\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.324287 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-config\") pod \"dnsmasq-dns-b8fbc5445-2bt8n\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:21:33 
crc kubenswrapper[4765]: I0121 13:21:33.324427 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-2bt8n\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.324555 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mqzc\" (UniqueName: \"kubernetes.io/projected/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-kube-api-access-6mqzc\") pod \"dnsmasq-dns-b8fbc5445-2bt8n\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.326237 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-2bt8n\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.326627 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-config\") pod \"dnsmasq-dns-b8fbc5445-2bt8n\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.327159 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-2bt8n\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.327724 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-2bt8n\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.355705 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mqzc\" (UniqueName: \"kubernetes.io/projected/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-kube-api-access-6mqzc\") pod \"dnsmasq-dns-b8fbc5445-2bt8n\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.529863 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-slv44"] Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.627832 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.759348 4765 generic.go:334] "Generic (PLEG): container finished" podID="64c10d89-0a3a-4106-b34e-ff8252758f2c" containerID="9d8f4041c937926a99cbe07a54477a15a1d91219fb87e79d6b8417069219140f" exitCode=0 Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.759432 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" event={"ID":"64c10d89-0a3a-4106-b34e-ff8252758f2c","Type":"ContainerDied","Data":"9d8f4041c937926a99cbe07a54477a15a1d91219fb87e79d6b8417069219140f"} Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.765540 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" event={"ID":"4d319df9-fb42-4085-8f96-4fd671ee4ac1","Type":"ContainerStarted","Data":"0dd08fee27001ec0ca3ce6a7ee38bce2d8cfc475498b4e557414d7d31e029694"} Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.765645 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-5wtns" podUID="d732c2fd-5d00-4ab9-8482-dc376f1924cb" containerName="dnsmasq-dns" containerID="cri-o://28ce6355dc82cff39d2b6c49e13f5caa4dbecebe7a60e0d3132cd2a20d4f9593" gracePeriod=10 Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.765916 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-jbggl" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.792121 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-jbggl" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.796691 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.806496 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.807868 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.812455 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.830968 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.831149 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.831254 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jc2f4" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.873813 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-zmx6x"] Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.908319 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.977514 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbb9q\" (UniqueName: \"kubernetes.io/projected/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-kube-api-access-hbb9q\") pod \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\" (UID: \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\") " Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.977650 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-config\") pod \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\" (UID: \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\") " Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.977730 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-ovsdbserver-nb\") pod \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\" (UID: \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\") " Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.977775 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-dns-svc\") pod \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\" (UID: \"73a1bfb6-fd7c-4dd0-8a8c-582977ed2571\") " Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.979493 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "73a1bfb6-fd7c-4dd0-8a8c-582977ed2571" (UID: "73a1bfb6-fd7c-4dd0-8a8c-582977ed2571"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.979725 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-config" (OuterVolumeSpecName: "config") pod "73a1bfb6-fd7c-4dd0-8a8c-582977ed2571" (UID: "73a1bfb6-fd7c-4dd0-8a8c-582977ed2571"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.979934 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "73a1bfb6-fd7c-4dd0-8a8c-582977ed2571" (UID: "73a1bfb6-fd7c-4dd0-8a8c-582977ed2571"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:33 crc kubenswrapper[4765]: I0121 13:21:33.987980 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-kube-api-access-hbb9q" (OuterVolumeSpecName: "kube-api-access-hbb9q") pod "73a1bfb6-fd7c-4dd0-8a8c-582977ed2571" (UID: "73a1bfb6-fd7c-4dd0-8a8c-582977ed2571"). InnerVolumeSpecName "kube-api-access-hbb9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.079512 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.080482 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p59vm\" (UniqueName: \"kubernetes.io/projected/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-kube-api-access-p59vm\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.080574 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.080613 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.080652 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-config\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.080682 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.080774 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-scripts\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.080813 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.080868 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.080886 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.080899 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbb9q\" (UniqueName: \"kubernetes.io/projected/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-kube-api-access-hbb9q\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.080910 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.110566 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.178473 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 21 13:21:34 crc kubenswrapper[4765]: E0121 13:21:34.179043 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c10d89-0a3a-4106-b34e-ff8252758f2c" containerName="dnsmasq-dns" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.179067 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c10d89-0a3a-4106-b34e-ff8252758f2c" containerName="dnsmasq-dns" Jan 21 13:21:34 crc kubenswrapper[4765]: E0121 13:21:34.179111 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c10d89-0a3a-4106-b34e-ff8252758f2c" containerName="init" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.179119 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c10d89-0a3a-4106-b34e-ff8252758f2c" containerName="init" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.179335 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="64c10d89-0a3a-4106-b34e-ff8252758f2c" containerName="dnsmasq-dns" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.183137 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c10d89-0a3a-4106-b34e-ff8252758f2c-config\") pod \"64c10d89-0a3a-4106-b34e-ff8252758f2c\" (UID: \"64c10d89-0a3a-4106-b34e-ff8252758f2c\") " Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.183249 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5pmn\" (UniqueName: \"kubernetes.io/projected/64c10d89-0a3a-4106-b34e-ff8252758f2c-kube-api-access-l5pmn\") pod \"64c10d89-0a3a-4106-b34e-ff8252758f2c\" (UID: \"64c10d89-0a3a-4106-b34e-ff8252758f2c\") " Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.183563 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64c10d89-0a3a-4106-b34e-ff8252758f2c-dns-svc\") pod 
\"64c10d89-0a3a-4106-b34e-ff8252758f2c\" (UID: \"64c10d89-0a3a-4106-b34e-ff8252758f2c\") " Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.183777 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p59vm\" (UniqueName: \"kubernetes.io/projected/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-kube-api-access-p59vm\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.183846 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.183882 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.183908 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-config\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.183932 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.183999 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-scripts\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.184026 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.184437 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.187955 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.195026 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-scripts\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.200185 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-vpgr2" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.200603 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.200889 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.203229 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-config\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.204167 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.214018 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c10d89-0a3a-4106-b34e-ff8252758f2c-kube-api-access-l5pmn" (OuterVolumeSpecName: "kube-api-access-l5pmn") pod "64c10d89-0a3a-4106-b34e-ff8252758f2c" (UID: "64c10d89-0a3a-4106-b34e-ff8252758f2c"). InnerVolumeSpecName "kube-api-access-l5pmn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.214508 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.218720 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.225272 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.239152 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p59vm\" (UniqueName: \"kubernetes.io/projected/729e9cbc-22fc-4dea-a03d-5ebcd6c5f183-kube-api-access-p59vm\") pod \"ovn-northd-0\" (UID: \"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183\") " pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.246728 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.279009 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.287470 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-etc-swift\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.287520 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnnz2\" (UniqueName: \"kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-kube-api-access-vnnz2\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.287583 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/89b81f15-19f3-4dab-9b2d-fa41b2eab844-cache\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.287614 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/89b81f15-19f3-4dab-9b2d-fa41b2eab844-lock\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.287653 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.288403 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5pmn\" (UniqueName: \"kubernetes.io/projected/64c10d89-0a3a-4106-b34e-ff8252758f2c-kube-api-access-l5pmn\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.299714 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64c10d89-0a3a-4106-b34e-ff8252758f2c-config" (OuterVolumeSpecName: "config") pod "64c10d89-0a3a-4106-b34e-ff8252758f2c" (UID: "64c10d89-0a3a-4106-b34e-ff8252758f2c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.319005 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-j5v45"] Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.319987 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.320368 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64c10d89-0a3a-4106-b34e-ff8252758f2c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64c10d89-0a3a-4106-b34e-ff8252758f2c" (UID: "64c10d89-0a3a-4106-b34e-ff8252758f2c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.325555 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.325706 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.327376 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.392877 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwllw\" (UniqueName: \"kubernetes.io/projected/60abe159-7e5d-4586-9d1b-0050de42edbe-kube-api-access-gwllw\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.392958 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60abe159-7e5d-4586-9d1b-0050de42edbe-scripts\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.392987 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60abe159-7e5d-4586-9d1b-0050de42edbe-combined-ca-bundle\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.393041 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/60abe159-7e5d-4586-9d1b-0050de42edbe-ring-data-devices\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.393060 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/60abe159-7e5d-4586-9d1b-0050de42edbe-etc-swift\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.393100 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60abe159-7e5d-4586-9d1b-0050de42edbe-dispersionconf\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.393143 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-etc-swift\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.393183 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnnz2\" (UniqueName: \"kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-kube-api-access-vnnz2\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.393288 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60abe159-7e5d-4586-9d1b-0050de42edbe-swiftconf\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.393498 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/89b81f15-19f3-4dab-9b2d-fa41b2eab844-cache\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.393542 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/89b81f15-19f3-4dab-9b2d-fa41b2eab844-lock\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.393573 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.393644 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64c10d89-0a3a-4106-b34e-ff8252758f2c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.393662 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/64c10d89-0a3a-4106-b34e-ff8252758f2c-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.394101 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.394440 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-j5v45"] Jan 21 13:21:34 crc kubenswrapper[4765]: E0121 13:21:34.394595 4765 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 13:21:34 crc kubenswrapper[4765]: E0121 13:21:34.394613 4765 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 13:21:34 crc kubenswrapper[4765]: E0121 13:21:34.394657 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-etc-swift podName:89b81f15-19f3-4dab-9b2d-fa41b2eab844 nodeName:}" failed. No retries permitted until 2026-01-21 13:21:34.894640422 +0000 UTC m=+1155.912366244 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-etc-swift") pod "swift-storage-0" (UID: "89b81f15-19f3-4dab-9b2d-fa41b2eab844") : configmap "swift-ring-files" not found Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.395439 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/89b81f15-19f3-4dab-9b2d-fa41b2eab844-cache\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.395684 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/89b81f15-19f3-4dab-9b2d-fa41b2eab844-lock\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.463072 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnnz2\" (UniqueName: \"kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-kube-api-access-vnnz2\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.496066 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.533159 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwllw\" (UniqueName: \"kubernetes.io/projected/60abe159-7e5d-4586-9d1b-0050de42edbe-kube-api-access-gwllw\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.533252 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60abe159-7e5d-4586-9d1b-0050de42edbe-scripts\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.533275 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60abe159-7e5d-4586-9d1b-0050de42edbe-combined-ca-bundle\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.533340 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/60abe159-7e5d-4586-9d1b-0050de42edbe-ring-data-devices\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.533521 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/60abe159-7e5d-4586-9d1b-0050de42edbe-etc-swift\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.533553 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60abe159-7e5d-4586-9d1b-0050de42edbe-dispersionconf\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.533700 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60abe159-7e5d-4586-9d1b-0050de42edbe-swiftconf\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.534156 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60abe159-7e5d-4586-9d1b-0050de42edbe-scripts\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.545616 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/60abe159-7e5d-4586-9d1b-0050de42edbe-etc-swift\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.546297 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/60abe159-7e5d-4586-9d1b-0050de42edbe-ring-data-devices\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.563464 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60abe159-7e5d-4586-9d1b-0050de42edbe-combined-ca-bundle\") 
pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.576788 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwllw\" (UniqueName: \"kubernetes.io/projected/60abe159-7e5d-4586-9d1b-0050de42edbe-kube-api-access-gwllw\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.576929 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60abe159-7e5d-4586-9d1b-0050de42edbe-swiftconf\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.584804 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-2bt8n"] Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.598735 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60abe159-7e5d-4586-9d1b-0050de42edbe-dispersionconf\") pod \"swift-ring-rebalance-j5v45\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.640878 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.695976 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5wtns" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.809664 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zmx6x" event={"ID":"2c7cc04a-963e-42e5-82ca-674e3e576a27","Type":"ContainerStarted","Data":"dcbcf5b396d28f13a2850f1336a501c89f92deac678b20addbd9f8920e1ec195"} Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.809745 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zmx6x" event={"ID":"2c7cc04a-963e-42e5-82ca-674e3e576a27","Type":"ContainerStarted","Data":"5f6e60b6aae7a1ed3f65763b7ea519ceeb201b216ee804c50539314940dec311"} Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.820496 4765 generic.go:334] "Generic (PLEG): container finished" podID="d732c2fd-5d00-4ab9-8482-dc376f1924cb" containerID="28ce6355dc82cff39d2b6c49e13f5caa4dbecebe7a60e0d3132cd2a20d4f9593" exitCode=0 Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.820586 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5wtns" event={"ID":"d732c2fd-5d00-4ab9-8482-dc376f1924cb","Type":"ContainerDied","Data":"28ce6355dc82cff39d2b6c49e13f5caa4dbecebe7a60e0d3132cd2a20d4f9593"} Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.820618 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5wtns" event={"ID":"d732c2fd-5d00-4ab9-8482-dc376f1924cb","Type":"ContainerDied","Data":"540aa33e1cc153fddf1f3974325d6d62daf3fbc97e2709502d25959bd7d34449"} Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.820635 4765 scope.go:117] "RemoveContainer" containerID="28ce6355dc82cff39d2b6c49e13f5caa4dbecebe7a60e0d3132cd2a20d4f9593" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.820827 4765 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5wtns" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.843606 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-zmx6x" podStartSLOduration=2.843585307 podStartE2EDuration="2.843585307s" podCreationTimestamp="2026-01-21 13:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:21:34.835829019 +0000 UTC m=+1155.853554841" watchObservedRunningTime="2026-01-21 13:21:34.843585307 +0000 UTC m=+1155.861311129" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.865099 4765 generic.go:334] "Generic (PLEG): container finished" podID="4d319df9-fb42-4085-8f96-4fd671ee4ac1" containerID="21ca98e9119a0330c04f2542d8cd8b5a6cc10f1ebc18d0a1425a21a9f5212956" exitCode=0 Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.866116 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" event={"ID":"4d319df9-fb42-4085-8f96-4fd671ee4ac1","Type":"ContainerDied","Data":"21ca98e9119a0330c04f2542d8cd8b5a6cc10f1ebc18d0a1425a21a9f5212956"} Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.871534 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv94r\" (UniqueName: \"kubernetes.io/projected/d732c2fd-5d00-4ab9-8482-dc376f1924cb-kube-api-access-cv94r\") pod \"d732c2fd-5d00-4ab9-8482-dc376f1924cb\" (UID: \"d732c2fd-5d00-4ab9-8482-dc376f1924cb\") " Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.872000 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d732c2fd-5d00-4ab9-8482-dc376f1924cb-dns-svc\") pod \"d732c2fd-5d00-4ab9-8482-dc376f1924cb\" (UID: \"d732c2fd-5d00-4ab9-8482-dc376f1924cb\") " Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.872064 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d732c2fd-5d00-4ab9-8482-dc376f1924cb-config\") pod \"d732c2fd-5d00-4ab9-8482-dc376f1924cb\" (UID: \"d732c2fd-5d00-4ab9-8482-dc376f1924cb\") " Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.875020 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" event={"ID":"dbc3ab54-aeb7-4536-a7c9-30078f148ec5","Type":"ContainerStarted","Data":"678fa8aabb8ebb41101f22207e69d37d7f10ddfcad5db166aeeed2355a54e511"} Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.892443 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d732c2fd-5d00-4ab9-8482-dc376f1924cb-kube-api-access-cv94r" (OuterVolumeSpecName: "kube-api-access-cv94r") pod "d732c2fd-5d00-4ab9-8482-dc376f1924cb" (UID: "d732c2fd-5d00-4ab9-8482-dc376f1924cb"). InnerVolumeSpecName "kube-api-access-cv94r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.892693 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" event={"ID":"64c10d89-0a3a-4106-b34e-ff8252758f2c","Type":"ContainerDied","Data":"7e27cf707b47e5f0289584343409dcc9d3f52d68a60c561c788be967ded9492e"} Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.892839 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-sxfhw" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.896767 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-jbggl" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.926928 4765 scope.go:117] "RemoveContainer" containerID="36fea743941dfb32b06b76007e5e0dd40fa40ab9185ca9c7c4b9909bdb666568" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.945359 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d732c2fd-5d00-4ab9-8482-dc376f1924cb-config" (OuterVolumeSpecName: "config") pod "d732c2fd-5d00-4ab9-8482-dc376f1924cb" (UID: "d732c2fd-5d00-4ab9-8482-dc376f1924cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.981948 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-etc-swift\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.985529 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d732c2fd-5d00-4ab9-8482-dc376f1924cb-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:34 crc kubenswrapper[4765]: I0121 13:21:34.985549 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv94r\" (UniqueName: \"kubernetes.io/projected/d732c2fd-5d00-4ab9-8482-dc376f1924cb-kube-api-access-cv94r\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:34 crc kubenswrapper[4765]: E0121 13:21:34.988021 4765 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 13:21:34 crc kubenswrapper[4765]: E0121 13:21:34.988041 4765 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 13:21:34 crc kubenswrapper[4765]: E0121 13:21:34.988086 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-etc-swift podName:89b81f15-19f3-4dab-9b2d-fa41b2eab844 nodeName:}" failed. No retries permitted until 2026-01-21 13:21:35.988066911 +0000 UTC m=+1157.005792793 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-etc-swift") pod "swift-storage-0" (UID: "89b81f15-19f3-4dab-9b2d-fa41b2eab844") : configmap "swift-ring-files" not found Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.012551 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d732c2fd-5d00-4ab9-8482-dc376f1924cb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d732c2fd-5d00-4ab9-8482-dc376f1924cb" (UID: "d732c2fd-5d00-4ab9-8482-dc376f1924cb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.013283 4765 scope.go:117] "RemoveContainer" containerID="28ce6355dc82cff39d2b6c49e13f5caa4dbecebe7a60e0d3132cd2a20d4f9593" Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.013441 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-jbggl"] Jan 21 13:21:35 crc kubenswrapper[4765]: E0121 13:21:35.013864 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28ce6355dc82cff39d2b6c49e13f5caa4dbecebe7a60e0d3132cd2a20d4f9593\": container with ID starting with 28ce6355dc82cff39d2b6c49e13f5caa4dbecebe7a60e0d3132cd2a20d4f9593 not found: ID does not exist" containerID="28ce6355dc82cff39d2b6c49e13f5caa4dbecebe7a60e0d3132cd2a20d4f9593" Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.013909 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28ce6355dc82cff39d2b6c49e13f5caa4dbecebe7a60e0d3132cd2a20d4f9593"} err="failed to get container status \"28ce6355dc82cff39d2b6c49e13f5caa4dbecebe7a60e0d3132cd2a20d4f9593\": rpc error: code = NotFound desc = could not find container \"28ce6355dc82cff39d2b6c49e13f5caa4dbecebe7a60e0d3132cd2a20d4f9593\": container with ID starting with 28ce6355dc82cff39d2b6c49e13f5caa4dbecebe7a60e0d3132cd2a20d4f9593 not found: ID does not exist" Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.013941 4765 scope.go:117] "RemoveContainer" containerID="36fea743941dfb32b06b76007e5e0dd40fa40ab9185ca9c7c4b9909bdb666568" Jan 21 13:21:35 crc kubenswrapper[4765]: E0121 13:21:35.015779 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36fea743941dfb32b06b76007e5e0dd40fa40ab9185ca9c7c4b9909bdb666568\": container with ID starting with 36fea743941dfb32b06b76007e5e0dd40fa40ab9185ca9c7c4b9909bdb666568 not found: ID does not exist" containerID="36fea743941dfb32b06b76007e5e0dd40fa40ab9185ca9c7c4b9909bdb666568" Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.015804 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36fea743941dfb32b06b76007e5e0dd40fa40ab9185ca9c7c4b9909bdb666568"} err="failed to get container status \"36fea743941dfb32b06b76007e5e0dd40fa40ab9185ca9c7c4b9909bdb666568\": rpc error: code = NotFound desc = could not find container \"36fea743941dfb32b06b76007e5e0dd40fa40ab9185ca9c7c4b9909bdb666568\": container with ID starting with 36fea743941dfb32b06b76007e5e0dd40fa40ab9185ca9c7c4b9909bdb666568 not found: ID does not exist" Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.015819 4765 scope.go:117] "RemoveContainer" containerID="9d8f4041c937926a99cbe07a54477a15a1d91219fb87e79d6b8417069219140f" Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.024024 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-jbggl"] Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.043478 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-sxfhw"] Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.065122 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-sxfhw"] Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.075947 4765 scope.go:117] "RemoveContainer" containerID="cd1307237e1ca30def3b3c6d6584a9203ed9979625d371b4a8401c42ab2fce76" Jan 21 
13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.087894 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d732c2fd-5d00-4ab9-8482-dc376f1924cb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.201081 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5wtns"] Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.226727 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5wtns"] Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.239542 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 13:21:35 crc kubenswrapper[4765]: W0121 13:21:35.269777 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod729e9cbc_22fc_4dea_a03d_5ebcd6c5f183.slice/crio-be3c3ea9681bf02e89e4104141ddd7abd16d4ac469ee0143ca2c94e146b1427c WatchSource:0}: Error finding container be3c3ea9681bf02e89e4104141ddd7abd16d4ac469ee0143ca2c94e146b1427c: Status 404 returned error can't find the container with id be3c3ea9681bf02e89e4104141ddd7abd16d4ac469ee0143ca2c94e146b1427c Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.379712 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-j5v45"] Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.626587 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64c10d89-0a3a-4106-b34e-ff8252758f2c" path="/var/lib/kubelet/pods/64c10d89-0a3a-4106-b34e-ff8252758f2c/volumes" Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.627354 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73a1bfb6-fd7c-4dd0-8a8c-582977ed2571" path="/var/lib/kubelet/pods/73a1bfb6-fd7c-4dd0-8a8c-582977ed2571/volumes" Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.627680 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d732c2fd-5d00-4ab9-8482-dc376f1924cb" path="/var/lib/kubelet/pods/d732c2fd-5d00-4ab9-8482-dc376f1924cb/volumes" Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.910044 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j5v45" event={"ID":"60abe159-7e5d-4586-9d1b-0050de42edbe","Type":"ContainerStarted","Data":"f63f5d984044dcb02f8af93f0c021fb28cda9ddf5a74adb056ffe1051fc5499c"} Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.914172 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183","Type":"ContainerStarted","Data":"be3c3ea9681bf02e89e4104141ddd7abd16d4ac469ee0143ca2c94e146b1427c"} Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.917401 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" event={"ID":"4d319df9-fb42-4085-8f96-4fd671ee4ac1","Type":"ContainerStarted","Data":"c9adc1a2fee911ee8f9ffeb7d5635bb997f41fe2d4cb3f440c91fc1c69005823"} Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.917934 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.928924 4765 generic.go:334] "Generic (PLEG): container finished" podID="dbc3ab54-aeb7-4536-a7c9-30078f148ec5" containerID="ad2c682c9eb8a2f35d73a6fa79fd3f1426e419368e0235e5909f3d659d53644c" exitCode=0 Jan 21 13:21:35 
crc kubenswrapper[4765]: I0121 13:21:35.930378 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" event={"ID":"dbc3ab54-aeb7-4536-a7c9-30078f148ec5","Type":"ContainerDied","Data":"ad2c682c9eb8a2f35d73a6fa79fd3f1426e419368e0235e5909f3d659d53644c"} Jan 21 13:21:35 crc kubenswrapper[4765]: I0121 13:21:35.941010 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" podStartSLOduration=3.940990909 podStartE2EDuration="3.940990909s" podCreationTimestamp="2026-01-21 13:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:21:35.938064653 +0000 UTC m=+1156.955790475" watchObservedRunningTime="2026-01-21 13:21:35.940990909 +0000 UTC m=+1156.958716741" Jan 21 13:21:36 crc kubenswrapper[4765]: I0121 13:21:36.010149 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-etc-swift\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:36 crc kubenswrapper[4765]: E0121 13:21:36.010720 4765 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 13:21:36 crc kubenswrapper[4765]: E0121 13:21:36.010743 4765 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 13:21:36 crc kubenswrapper[4765]: E0121 13:21:36.010794 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-etc-swift podName:89b81f15-19f3-4dab-9b2d-fa41b2eab844 nodeName:}" failed. No retries permitted until 2026-01-21 13:21:38.010774779 +0000 UTC m=+1159.028500641 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-etc-swift") pod "swift-storage-0" (UID: "89b81f15-19f3-4dab-9b2d-fa41b2eab844") : configmap "swift-ring-files" not found Jan 21 13:21:36 crc kubenswrapper[4765]: I0121 13:21:36.937264 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 21 13:21:36 crc kubenswrapper[4765]: I0121 13:21:36.969397 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" event={"ID":"dbc3ab54-aeb7-4536-a7c9-30078f148ec5","Type":"ContainerStarted","Data":"723ece5d58ab11eb6af4757ddbaea824afe9040fdf90e2e933af82aebabd1de7"} Jan 21 13:21:36 crc kubenswrapper[4765]: I0121 13:21:36.973221 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:21:37 crc kubenswrapper[4765]: I0121 13:21:37.000178 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" podStartSLOduration=5.000160238 podStartE2EDuration="5.000160238s" podCreationTimestamp="2026-01-21 13:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:21:36.996854561 +0000 UTC m=+1158.014580393" watchObservedRunningTime="2026-01-21 13:21:37.000160238 +0000 UTC m=+1158.017886060" Jan 21 13:21:37 crc kubenswrapper[4765]: I0121 13:21:37.089535 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 21 13:21:37 crc kubenswrapper[4765]: I0121 13:21:37.511094 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-k4hl8"] Jan 21 13:21:37 crc kubenswrapper[4765]: E0121 13:21:37.511862 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d732c2fd-5d00-4ab9-8482-dc376f1924cb" containerName="dnsmasq-dns" Jan 21 13:21:37 crc kubenswrapper[4765]: I0121 13:21:37.511884 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d732c2fd-5d00-4ab9-8482-dc376f1924cb" containerName="dnsmasq-dns" Jan 21 13:21:37 crc kubenswrapper[4765]: E0121 13:21:37.511928 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d732c2fd-5d00-4ab9-8482-dc376f1924cb" containerName="init" Jan 21 13:21:37 crc kubenswrapper[4765]: I0121 13:21:37.511936 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d732c2fd-5d00-4ab9-8482-dc376f1924cb" containerName="init" Jan 21 13:21:37 crc kubenswrapper[4765]: I0121 13:21:37.512169 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="d732c2fd-5d00-4ab9-8482-dc376f1924cb" containerName="dnsmasq-dns" Jan 21 13:21:37 crc kubenswrapper[4765]: I0121 13:21:37.512969 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-k4hl8" Jan 21 13:21:37 crc kubenswrapper[4765]: I0121 13:21:37.516169 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 13:21:37 crc kubenswrapper[4765]: I0121 13:21:37.531705 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-k4hl8"] Jan 21 13:21:37 crc kubenswrapper[4765]: I0121 13:21:37.658857 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b69578e-71e3-432f-afca-edc58a98e777-operator-scripts\") pod \"root-account-create-update-k4hl8\" (UID: \"9b69578e-71e3-432f-afca-edc58a98e777\") " pod="openstack/root-account-create-update-k4hl8" Jan 21 13:21:37 crc kubenswrapper[4765]: I0121 13:21:37.658920 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdx59\" (UniqueName: \"kubernetes.io/projected/9b69578e-71e3-432f-afca-edc58a98e777-kube-api-access-hdx59\") pod \"root-account-create-update-k4hl8\" (UID: \"9b69578e-71e3-432f-afca-edc58a98e777\") " pod="openstack/root-account-create-update-k4hl8" Jan 21 13:21:37 crc kubenswrapper[4765]: I0121 13:21:37.760998 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b69578e-71e3-432f-afca-edc58a98e777-operator-scripts\") pod \"root-account-create-update-k4hl8\" (UID: \"9b69578e-71e3-432f-afca-edc58a98e777\") " pod="openstack/root-account-create-update-k4hl8" Jan 21 13:21:37 crc kubenswrapper[4765]: I0121 13:21:37.761073 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdx59\" (UniqueName: \"kubernetes.io/projected/9b69578e-71e3-432f-afca-edc58a98e777-kube-api-access-hdx59\") pod \"root-account-create-update-k4hl8\" (UID: \"9b69578e-71e3-432f-afca-edc58a98e777\") " pod="openstack/root-account-create-update-k4hl8" Jan 21 13:21:37 crc kubenswrapper[4765]: I0121 13:21:37.762718 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b69578e-71e3-432f-afca-edc58a98e777-operator-scripts\") pod \"root-account-create-update-k4hl8\" (UID: \"9b69578e-71e3-432f-afca-edc58a98e777\") " pod="openstack/root-account-create-update-k4hl8" Jan 21 13:21:37 crc kubenswrapper[4765]: I0121 13:21:37.780841 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdx59\" (UniqueName: \"kubernetes.io/projected/9b69578e-71e3-432f-afca-edc58a98e777-kube-api-access-hdx59\") pod \"root-account-create-update-k4hl8\" (UID: \"9b69578e-71e3-432f-afca-edc58a98e777\") " pod="openstack/root-account-create-update-k4hl8" Jan 21 13:21:37 crc kubenswrapper[4765]: I0121 13:21:37.834239 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-k4hl8" Jan 21 13:21:37 crc kubenswrapper[4765]: I0121 13:21:37.986265 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183","Type":"ContainerStarted","Data":"ef342b5590a91106bcc0050d98b8f9f7835d507ca733319a9e21a9a029d56d3f"} Jan 21 13:21:38 crc kubenswrapper[4765]: I0121 13:21:38.067821 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-etc-swift\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:38 crc kubenswrapper[4765]: E0121 13:21:38.068600 4765 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 13:21:38 crc kubenswrapper[4765]: E0121 13:21:38.068638 4765 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 13:21:38 crc kubenswrapper[4765]: E0121 13:21:38.068711 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-etc-swift podName:89b81f15-19f3-4dab-9b2d-fa41b2eab844 nodeName:}" failed. No retries permitted until 2026-01-21 13:21:42.068687622 +0000 UTC m=+1163.086413444 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-etc-swift") pod "swift-storage-0" (UID: "89b81f15-19f3-4dab-9b2d-fa41b2eab844") : configmap "swift-ring-files" not found Jan 21 13:21:39 crc kubenswrapper[4765]: W0121 13:21:39.977412 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b69578e_71e3_432f_afca_edc58a98e777.slice/crio-fbce2da3c5c4c8a2b39a26a13c675a6eb6775d1023f0d240e31c2ad6148a124d WatchSource:0}: Error finding container fbce2da3c5c4c8a2b39a26a13c675a6eb6775d1023f0d240e31c2ad6148a124d: Status 404 returned error can't find the container with id fbce2da3c5c4c8a2b39a26a13c675a6eb6775d1023f0d240e31c2ad6148a124d Jan 21 13:21:39 crc kubenswrapper[4765]: I0121 13:21:39.978250 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-k4hl8"] Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.006641 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k4hl8" event={"ID":"9b69578e-71e3-432f-afca-edc58a98e777","Type":"ContainerStarted","Data":"fbce2da3c5c4c8a2b39a26a13c675a6eb6775d1023f0d240e31c2ad6148a124d"} Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.008661 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j5v45" event={"ID":"60abe159-7e5d-4586-9d1b-0050de42edbe","Type":"ContainerStarted","Data":"0ce2d7fdf18b9a1854c96293b2d18c16945726e72e36f640673be66d4a0c9797"} Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.010582 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"729e9cbc-22fc-4dea-a03d-5ebcd6c5f183","Type":"ContainerStarted","Data":"e32cdf56869301f3d8ec829ab352cffe62d3b6b7c5bccaaafcfa86465b217b75"} Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.012733 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 21 
13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.037465 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-j5v45" podStartSLOduration=1.9093983319999999 podStartE2EDuration="6.037445046s" podCreationTimestamp="2026-01-21 13:21:34 +0000 UTC" firstStartedPulling="2026-01-21 13:21:35.389667316 +0000 UTC m=+1156.407393138" lastFinishedPulling="2026-01-21 13:21:39.51771403 +0000 UTC m=+1160.535439852" observedRunningTime="2026-01-21 13:21:40.031152321 +0000 UTC m=+1161.048878143" watchObservedRunningTime="2026-01-21 13:21:40.037445046 +0000 UTC m=+1161.055170888" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.068571 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.522761378 podStartE2EDuration="7.068553159s" podCreationTimestamp="2026-01-21 13:21:33 +0000 UTC" firstStartedPulling="2026-01-21 13:21:35.271784914 +0000 UTC m=+1156.289510736" lastFinishedPulling="2026-01-21 13:21:36.817576695 +0000 UTC m=+1157.835302517" observedRunningTime="2026-01-21 13:21:40.054716393 +0000 UTC m=+1161.072442225" watchObservedRunningTime="2026-01-21 13:21:40.068553159 +0000 UTC m=+1161.086278981" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.354773 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-252fs"] Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.359715 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-252fs" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.362905 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-252fs"] Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.395447 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c267-account-create-update-ptrxt"] Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.396528 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c267-account-create-update-ptrxt" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.399628 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.426264 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c267-account-create-update-ptrxt"] Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.514932 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cc79a4c-772c-44ae-9b50-a3893d199b48-operator-scripts\") pod \"keystone-c267-account-create-update-ptrxt\" (UID: \"6cc79a4c-772c-44ae-9b50-a3893d199b48\") " pod="openstack/keystone-c267-account-create-update-ptrxt" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.515279 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqdcf\" (UniqueName: \"kubernetes.io/projected/6cc79a4c-772c-44ae-9b50-a3893d199b48-kube-api-access-zqdcf\") pod \"keystone-c267-account-create-update-ptrxt\" (UID: \"6cc79a4c-772c-44ae-9b50-a3893d199b48\") " pod="openstack/keystone-c267-account-create-update-ptrxt" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.515425 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88f4fc76-7416-4ac2-92b3-ef3649bbd6b1-operator-scripts\") pod \"keystone-db-create-252fs\" (UID: \"88f4fc76-7416-4ac2-92b3-ef3649bbd6b1\") " pod="openstack/keystone-db-create-252fs" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.515533 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p74ch\" (UniqueName: \"kubernetes.io/projected/88f4fc76-7416-4ac2-92b3-ef3649bbd6b1-kube-api-access-p74ch\") pod \"keystone-db-create-252fs\" (UID: \"88f4fc76-7416-4ac2-92b3-ef3649bbd6b1\") " pod="openstack/keystone-db-create-252fs" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.595038 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-lnvld"] Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.596450 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-lnvld" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.614767 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lnvld"] Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.617656 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cc79a4c-772c-44ae-9b50-a3893d199b48-operator-scripts\") pod \"keystone-c267-account-create-update-ptrxt\" (UID: \"6cc79a4c-772c-44ae-9b50-a3893d199b48\") " pod="openstack/keystone-c267-account-create-update-ptrxt" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.617698 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqdcf\" (UniqueName: \"kubernetes.io/projected/6cc79a4c-772c-44ae-9b50-a3893d199b48-kube-api-access-zqdcf\") pod \"keystone-c267-account-create-update-ptrxt\" (UID: \"6cc79a4c-772c-44ae-9b50-a3893d199b48\") " pod="openstack/keystone-c267-account-create-update-ptrxt" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.617732 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88f4fc76-7416-4ac2-92b3-ef3649bbd6b1-operator-scripts\") pod \"keystone-db-create-252fs\" (UID: \"88f4fc76-7416-4ac2-92b3-ef3649bbd6b1\") " pod="openstack/keystone-db-create-252fs" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.617750 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p74ch\" (UniqueName: \"kubernetes.io/projected/88f4fc76-7416-4ac2-92b3-ef3649bbd6b1-kube-api-access-p74ch\") pod \"keystone-db-create-252fs\" (UID: \"88f4fc76-7416-4ac2-92b3-ef3649bbd6b1\") " pod="openstack/keystone-db-create-252fs" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.618651 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cc79a4c-772c-44ae-9b50-a3893d199b48-operator-scripts\") pod \"keystone-c267-account-create-update-ptrxt\" (UID: \"6cc79a4c-772c-44ae-9b50-a3893d199b48\") " pod="openstack/keystone-c267-account-create-update-ptrxt" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.620430 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88f4fc76-7416-4ac2-92b3-ef3649bbd6b1-operator-scripts\") pod \"keystone-db-create-252fs\" (UID: \"88f4fc76-7416-4ac2-92b3-ef3649bbd6b1\") " pod="openstack/keystone-db-create-252fs" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.654138 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p74ch\" (UniqueName: \"kubernetes.io/projected/88f4fc76-7416-4ac2-92b3-ef3649bbd6b1-kube-api-access-p74ch\") pod \"keystone-db-create-252fs\" (UID: \"88f4fc76-7416-4ac2-92b3-ef3649bbd6b1\") " pod="openstack/keystone-db-create-252fs" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.668293 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqdcf\" (UniqueName: \"kubernetes.io/projected/6cc79a4c-772c-44ae-9b50-a3893d199b48-kube-api-access-zqdcf\") pod \"keystone-c267-account-create-update-ptrxt\" (UID: \"6cc79a4c-772c-44ae-9b50-a3893d199b48\") " pod="openstack/keystone-c267-account-create-update-ptrxt" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.705705 4765 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/placement-8873-account-create-update-pdsft"] Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.706765 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8873-account-create-update-pdsft" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.710984 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.719621 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qf97\" (UniqueName: \"kubernetes.io/projected/e0d0673a-0e41-46e0-ab22-19b6e9cb522a-kube-api-access-6qf97\") pod \"placement-db-create-lnvld\" (UID: \"e0d0673a-0e41-46e0-ab22-19b6e9cb522a\") " pod="openstack/placement-db-create-lnvld" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.719690 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0d0673a-0e41-46e0-ab22-19b6e9cb522a-operator-scripts\") pod \"placement-db-create-lnvld\" (UID: \"e0d0673a-0e41-46e0-ab22-19b6e9cb522a\") " pod="openstack/placement-db-create-lnvld" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.720384 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8873-account-create-update-pdsft"] Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.727772 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-252fs" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.735864 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c267-account-create-update-ptrxt" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.821726 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l24s\" (UniqueName: \"kubernetes.io/projected/c79ed78f-7aba-4980-b043-0850084ef3e8-kube-api-access-8l24s\") pod \"placement-8873-account-create-update-pdsft\" (UID: \"c79ed78f-7aba-4980-b043-0850084ef3e8\") " pod="openstack/placement-8873-account-create-update-pdsft" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.821812 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qf97\" (UniqueName: \"kubernetes.io/projected/e0d0673a-0e41-46e0-ab22-19b6e9cb522a-kube-api-access-6qf97\") pod \"placement-db-create-lnvld\" (UID: \"e0d0673a-0e41-46e0-ab22-19b6e9cb522a\") " pod="openstack/placement-db-create-lnvld" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.821884 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0d0673a-0e41-46e0-ab22-19b6e9cb522a-operator-scripts\") pod \"placement-db-create-lnvld\" (UID: \"e0d0673a-0e41-46e0-ab22-19b6e9cb522a\") " pod="openstack/placement-db-create-lnvld" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.821981 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c79ed78f-7aba-4980-b043-0850084ef3e8-operator-scripts\") pod \"placement-8873-account-create-update-pdsft\" (UID: \"c79ed78f-7aba-4980-b043-0850084ef3e8\") " pod="openstack/placement-8873-account-create-update-pdsft" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.822884 
4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0d0673a-0e41-46e0-ab22-19b6e9cb522a-operator-scripts\") pod \"placement-db-create-lnvld\" (UID: \"e0d0673a-0e41-46e0-ab22-19b6e9cb522a\") " pod="openstack/placement-db-create-lnvld" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.846039 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qf97\" (UniqueName: \"kubernetes.io/projected/e0d0673a-0e41-46e0-ab22-19b6e9cb522a-kube-api-access-6qf97\") pod \"placement-db-create-lnvld\" (UID: \"e0d0673a-0e41-46e0-ab22-19b6e9cb522a\") " pod="openstack/placement-db-create-lnvld" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.921141 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lnvld" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.922986 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l24s\" (UniqueName: \"kubernetes.io/projected/c79ed78f-7aba-4980-b043-0850084ef3e8-kube-api-access-8l24s\") pod \"placement-8873-account-create-update-pdsft\" (UID: \"c79ed78f-7aba-4980-b043-0850084ef3e8\") " pod="openstack/placement-8873-account-create-update-pdsft" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.923105 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c79ed78f-7aba-4980-b043-0850084ef3e8-operator-scripts\") pod \"placement-8873-account-create-update-pdsft\" (UID: \"c79ed78f-7aba-4980-b043-0850084ef3e8\") " pod="openstack/placement-8873-account-create-update-pdsft" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.923713 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c79ed78f-7aba-4980-b043-0850084ef3e8-operator-scripts\") pod \"placement-8873-account-create-update-pdsft\" (UID: \"c79ed78f-7aba-4980-b043-0850084ef3e8\") " pod="openstack/placement-8873-account-create-update-pdsft" Jan 21 13:21:40 crc kubenswrapper[4765]: I0121 13:21:40.946842 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l24s\" (UniqueName: \"kubernetes.io/projected/c79ed78f-7aba-4980-b043-0850084ef3e8-kube-api-access-8l24s\") pod \"placement-8873-account-create-update-pdsft\" (UID: \"c79ed78f-7aba-4980-b043-0850084ef3e8\") " pod="openstack/placement-8873-account-create-update-pdsft" Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.028680 4765 generic.go:334] "Generic (PLEG): container finished" podID="9b69578e-71e3-432f-afca-edc58a98e777" containerID="dc35c6079cde6ca926273ed0c9597921be6ae633b635627d413e8d212906e8d1" exitCode=0 Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.029853 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k4hl8" event={"ID":"9b69578e-71e3-432f-afca-edc58a98e777","Type":"ContainerDied","Data":"dc35c6079cde6ca926273ed0c9597921be6ae633b635627d413e8d212906e8d1"} Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.096977 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-vm5v2"] Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.098870 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vm5v2" Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.114807 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vm5v2"] Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.122805 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8873-account-create-update-pdsft" Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.214273 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b8a1-account-create-update-4xk2c"] Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.215484 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b8a1-account-create-update-4xk2c" Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.218119 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.228653 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b8a1-account-create-update-4xk2c"] Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.233480 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/246155f7-f2f2-4bb9-a1c3-640933aa45c6-operator-scripts\") pod \"glance-db-create-vm5v2\" (UID: \"246155f7-f2f2-4bb9-a1c3-640933aa45c6\") " pod="openstack/glance-db-create-vm5v2" Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.233557 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghj7t\" (UniqueName: \"kubernetes.io/projected/246155f7-f2f2-4bb9-a1c3-640933aa45c6-kube-api-access-ghj7t\") pod \"glance-db-create-vm5v2\" (UID: \"246155f7-f2f2-4bb9-a1c3-640933aa45c6\") " pod="openstack/glance-db-create-vm5v2" Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.312171 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-252fs"] Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.343384 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flrjc\" (UniqueName: \"kubernetes.io/projected/69eb1c3c-fc0b-48c1-8151-052e16dbf92e-kube-api-access-flrjc\") pod \"glance-b8a1-account-create-update-4xk2c\" (UID: \"69eb1c3c-fc0b-48c1-8151-052e16dbf92e\") " pod="openstack/glance-b8a1-account-create-update-4xk2c" Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.343562 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/246155f7-f2f2-4bb9-a1c3-640933aa45c6-operator-scripts\") pod \"glance-db-create-vm5v2\" (UID: \"246155f7-f2f2-4bb9-a1c3-640933aa45c6\") " pod="openstack/glance-db-create-vm5v2" Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.343658 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghj7t\" (UniqueName: \"kubernetes.io/projected/246155f7-f2f2-4bb9-a1c3-640933aa45c6-kube-api-access-ghj7t\") pod \"glance-db-create-vm5v2\" (UID: \"246155f7-f2f2-4bb9-a1c3-640933aa45c6\") " pod="openstack/glance-db-create-vm5v2" Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.343898 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/69eb1c3c-fc0b-48c1-8151-052e16dbf92e-operator-scripts\") pod \"glance-b8a1-account-create-update-4xk2c\" (UID: \"69eb1c3c-fc0b-48c1-8151-052e16dbf92e\") " pod="openstack/glance-b8a1-account-create-update-4xk2c" Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.344807 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/246155f7-f2f2-4bb9-a1c3-640933aa45c6-operator-scripts\") pod \"glance-db-create-vm5v2\" (UID: \"246155f7-f2f2-4bb9-a1c3-640933aa45c6\") " pod="openstack/glance-db-create-vm5v2" Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.345121 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c267-account-create-update-ptrxt"] Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.375891 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghj7t\" (UniqueName: \"kubernetes.io/projected/246155f7-f2f2-4bb9-a1c3-640933aa45c6-kube-api-access-ghj7t\") pod \"glance-db-create-vm5v2\" (UID: \"246155f7-f2f2-4bb9-a1c3-640933aa45c6\") " pod="openstack/glance-db-create-vm5v2" Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.458709 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69eb1c3c-fc0b-48c1-8151-052e16dbf92e-operator-scripts\") pod \"glance-b8a1-account-create-update-4xk2c\" (UID: \"69eb1c3c-fc0b-48c1-8151-052e16dbf92e\") " pod="openstack/glance-b8a1-account-create-update-4xk2c" Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.458779 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flrjc\" (UniqueName: \"kubernetes.io/projected/69eb1c3c-fc0b-48c1-8151-052e16dbf92e-kube-api-access-flrjc\") pod \"glance-b8a1-account-create-update-4xk2c\" (UID: \"69eb1c3c-fc0b-48c1-8151-052e16dbf92e\") " pod="openstack/glance-b8a1-account-create-update-4xk2c" Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.460046 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69eb1c3c-fc0b-48c1-8151-052e16dbf92e-operator-scripts\") pod \"glance-b8a1-account-create-update-4xk2c\" (UID: \"69eb1c3c-fc0b-48c1-8151-052e16dbf92e\") " pod="openstack/glance-b8a1-account-create-update-4xk2c" Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.475791 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vm5v2" Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.486354 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flrjc\" (UniqueName: \"kubernetes.io/projected/69eb1c3c-fc0b-48c1-8151-052e16dbf92e-kube-api-access-flrjc\") pod \"glance-b8a1-account-create-update-4xk2c\" (UID: \"69eb1c3c-fc0b-48c1-8151-052e16dbf92e\") " pod="openstack/glance-b8a1-account-create-update-4xk2c" Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.506198 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lnvld"] Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.536469 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b8a1-account-create-update-4xk2c" Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.680388 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8873-account-create-update-pdsft"] Jan 21 13:21:41 crc kubenswrapper[4765]: I0121 13:21:41.998481 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vm5v2"] Jan 21 13:21:42 crc kubenswrapper[4765]: W0121 13:21:42.001435 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod246155f7_f2f2_4bb9_a1c3_640933aa45c6.slice/crio-74fc7b673da22692a24ed99254e6b5cf6a4bc09492af933f6b6f02f192431e61 WatchSource:0}: Error finding container 74fc7b673da22692a24ed99254e6b5cf6a4bc09492af933f6b6f02f192431e61: Status 404 returned error can't find the container with id 74fc7b673da22692a24ed99254e6b5cf6a4bc09492af933f6b6f02f192431e61 Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.051984 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lnvld" event={"ID":"e0d0673a-0e41-46e0-ab22-19b6e9cb522a","Type":"ContainerStarted","Data":"7fc7c3dea81a557f7ceb2189e386c92f82e9daee36dfda8a843b15af5104eb08"} Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.052028 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lnvld" event={"ID":"e0d0673a-0e41-46e0-ab22-19b6e9cb522a","Type":"ContainerStarted","Data":"713219edceeae599317154d656865551bfa9749e93069b2236cbc83ee653d1fc"} Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.053574 4765 generic.go:334] "Generic (PLEG): container finished" podID="88f4fc76-7416-4ac2-92b3-ef3649bbd6b1" containerID="4c3791c73117b27994075d48e1bf97e583e9c9fc05b3e7943415fc23304ff092" exitCode=0 Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.053614 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-252fs" event={"ID":"88f4fc76-7416-4ac2-92b3-ef3649bbd6b1","Type":"ContainerDied","Data":"4c3791c73117b27994075d48e1bf97e583e9c9fc05b3e7943415fc23304ff092"} Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.053629 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-252fs" event={"ID":"88f4fc76-7416-4ac2-92b3-ef3649bbd6b1","Type":"ContainerStarted","Data":"034d4ee407153ad5355946ae7c2bb39d4fa572cb80c7822632f8a607c2de5389"} Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.056779 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8873-account-create-update-pdsft" event={"ID":"c79ed78f-7aba-4980-b043-0850084ef3e8","Type":"ContainerStarted","Data":"2e1bf7f9019dfd452e2d23adf2e26d23645f273a7c75cfd3880b3711ea351390"} Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.056804 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8873-account-create-update-pdsft" event={"ID":"c79ed78f-7aba-4980-b043-0850084ef3e8","Type":"ContainerStarted","Data":"c23db034073784d2fb934098b10148453068267dd011442fd784272507726120"} Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.058569 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c267-account-create-update-ptrxt" event={"ID":"6cc79a4c-772c-44ae-9b50-a3893d199b48","Type":"ContainerStarted","Data":"571898e2a863f5fb60fe91dbb6e313e258b379dffbc206b9b45eb650ae5be66e"} Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.058626 4765 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/keystone-c267-account-create-update-ptrxt" event={"ID":"6cc79a4c-772c-44ae-9b50-a3893d199b48","Type":"ContainerStarted","Data":"e4a4cc35d91c3f4028cf5f618448bd7d7cf737b2d0b4a85c7e93fbdb22d4a35c"} Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.062186 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vm5v2" event={"ID":"246155f7-f2f2-4bb9-a1c3-640933aa45c6","Type":"ContainerStarted","Data":"74fc7b673da22692a24ed99254e6b5cf6a4bc09492af933f6b6f02f192431e61"} Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.072145 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-etc-swift\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:42 crc kubenswrapper[4765]: E0121 13:21:42.072360 4765 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 13:21:42 crc kubenswrapper[4765]: E0121 13:21:42.072385 4765 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 13:21:42 crc kubenswrapper[4765]: E0121 13:21:42.072446 4765 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-etc-swift podName:89b81f15-19f3-4dab-9b2d-fa41b2eab844 nodeName:}" failed. No retries permitted until 2026-01-21 13:21:50.072427274 +0000 UTC m=+1171.090153096 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-etc-swift") pod "swift-storage-0" (UID: "89b81f15-19f3-4dab-9b2d-fa41b2eab844") : configmap "swift-ring-files" not found Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.077522 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-lnvld" podStartSLOduration=2.077501143 podStartE2EDuration="2.077501143s" podCreationTimestamp="2026-01-21 13:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:21:42.069381015 +0000 UTC m=+1163.087106837" watchObservedRunningTime="2026-01-21 13:21:42.077501143 +0000 UTC m=+1163.095226985" Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.092661 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c267-account-create-update-ptrxt" podStartSLOduration=2.092644898 podStartE2EDuration="2.092644898s" podCreationTimestamp="2026-01-21 13:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:21:42.086490747 +0000 UTC m=+1163.104216589" watchObservedRunningTime="2026-01-21 13:21:42.092644898 +0000 UTC m=+1163.110370720" Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.103825 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.141436 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8873-account-create-update-pdsft" podStartSLOduration=2.14140764 podStartE2EDuration="2.14140764s" podCreationTimestamp="2026-01-21 13:21:40 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:21:42.12438535 +0000 UTC m=+1163.142111172" watchObservedRunningTime="2026-01-21 13:21:42.14140764 +0000 UTC m=+1163.159133462" Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.224544 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b8a1-account-create-update-4xk2c"] Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.478388 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.687154 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-k4hl8" Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.804251 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdx59\" (UniqueName: \"kubernetes.io/projected/9b69578e-71e3-432f-afca-edc58a98e777-kube-api-access-hdx59\") pod \"9b69578e-71e3-432f-afca-edc58a98e777\" (UID: \"9b69578e-71e3-432f-afca-edc58a98e777\") " Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.804358 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b69578e-71e3-432f-afca-edc58a98e777-operator-scripts\") pod \"9b69578e-71e3-432f-afca-edc58a98e777\" (UID: \"9b69578e-71e3-432f-afca-edc58a98e777\") " Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.804923 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b69578e-71e3-432f-afca-edc58a98e777-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9b69578e-71e3-432f-afca-edc58a98e777" (UID: "9b69578e-71e3-432f-afca-edc58a98e777"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.811136 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b69578e-71e3-432f-afca-edc58a98e777-kube-api-access-hdx59" (OuterVolumeSpecName: "kube-api-access-hdx59") pod "9b69578e-71e3-432f-afca-edc58a98e777" (UID: "9b69578e-71e3-432f-afca-edc58a98e777"). InnerVolumeSpecName "kube-api-access-hdx59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.907246 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdx59\" (UniqueName: \"kubernetes.io/projected/9b69578e-71e3-432f-afca-edc58a98e777-kube-api-access-hdx59\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:42 crc kubenswrapper[4765]: I0121 13:21:42.907311 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b69578e-71e3-432f-afca-edc58a98e777-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.072739 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k4hl8" event={"ID":"9b69578e-71e3-432f-afca-edc58a98e777","Type":"ContainerDied","Data":"fbce2da3c5c4c8a2b39a26a13c675a6eb6775d1023f0d240e31c2ad6148a124d"} Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.072783 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbce2da3c5c4c8a2b39a26a13c675a6eb6775d1023f0d240e31c2ad6148a124d" Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.072849 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-k4hl8" Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.078487 4765 generic.go:334] "Generic (PLEG): container finished" podID="e0d0673a-0e41-46e0-ab22-19b6e9cb522a" containerID="7fc7c3dea81a557f7ceb2189e386c92f82e9daee36dfda8a843b15af5104eb08" exitCode=0 Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.078583 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lnvld" event={"ID":"e0d0673a-0e41-46e0-ab22-19b6e9cb522a","Type":"ContainerDied","Data":"7fc7c3dea81a557f7ceb2189e386c92f82e9daee36dfda8a843b15af5104eb08"} Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.080318 4765 generic.go:334] "Generic (PLEG): container finished" podID="c79ed78f-7aba-4980-b043-0850084ef3e8" containerID="2e1bf7f9019dfd452e2d23adf2e26d23645f273a7c75cfd3880b3711ea351390" exitCode=0 Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.080365 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8873-account-create-update-pdsft" event={"ID":"c79ed78f-7aba-4980-b043-0850084ef3e8","Type":"ContainerDied","Data":"2e1bf7f9019dfd452e2d23adf2e26d23645f273a7c75cfd3880b3711ea351390"} Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.083197 4765 generic.go:334] "Generic (PLEG): container finished" podID="69eb1c3c-fc0b-48c1-8151-052e16dbf92e" containerID="5b90fa3b9fae42d7931aa71bf148ac1e58295958d5083b20ad8e9c4b80d234e7" exitCode=0 Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.083310 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b8a1-account-create-update-4xk2c" event={"ID":"69eb1c3c-fc0b-48c1-8151-052e16dbf92e","Type":"ContainerDied","Data":"5b90fa3b9fae42d7931aa71bf148ac1e58295958d5083b20ad8e9c4b80d234e7"} Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.083363 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b8a1-account-create-update-4xk2c" event={"ID":"69eb1c3c-fc0b-48c1-8151-052e16dbf92e","Type":"ContainerStarted","Data":"e073e72ea07635fdd8a2bc3d9518c709c606fec95042a3f4e332aedca76c43c4"} Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.088701 4765 generic.go:334] "Generic (PLEG): container finished" 
podID="6cc79a4c-772c-44ae-9b50-a3893d199b48" containerID="571898e2a863f5fb60fe91dbb6e313e258b379dffbc206b9b45eb650ae5be66e" exitCode=0 Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.088763 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c267-account-create-update-ptrxt" event={"ID":"6cc79a4c-772c-44ae-9b50-a3893d199b48","Type":"ContainerDied","Data":"571898e2a863f5fb60fe91dbb6e313e258b379dffbc206b9b45eb650ae5be66e"} Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.094894 4765 generic.go:334] "Generic (PLEG): container finished" podID="246155f7-f2f2-4bb9-a1c3-640933aa45c6" containerID="230fbe4ed5fb02480a0bd6797e535ddc1b0206bb7db062fa88bae0232e4c83a8" exitCode=0 Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.095188 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vm5v2" event={"ID":"246155f7-f2f2-4bb9-a1c3-640933aa45c6","Type":"ContainerDied","Data":"230fbe4ed5fb02480a0bd6797e535ddc1b0206bb7db062fa88bae0232e4c83a8"} Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.425187 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-252fs" Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.622726 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p74ch\" (UniqueName: \"kubernetes.io/projected/88f4fc76-7416-4ac2-92b3-ef3649bbd6b1-kube-api-access-p74ch\") pod \"88f4fc76-7416-4ac2-92b3-ef3649bbd6b1\" (UID: \"88f4fc76-7416-4ac2-92b3-ef3649bbd6b1\") " Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.622791 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88f4fc76-7416-4ac2-92b3-ef3649bbd6b1-operator-scripts\") pod \"88f4fc76-7416-4ac2-92b3-ef3649bbd6b1\" (UID: \"88f4fc76-7416-4ac2-92b3-ef3649bbd6b1\") " Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.623980 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88f4fc76-7416-4ac2-92b3-ef3649bbd6b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88f4fc76-7416-4ac2-92b3-ef3649bbd6b1" (UID: "88f4fc76-7416-4ac2-92b3-ef3649bbd6b1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.630349 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.639728 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88f4fc76-7416-4ac2-92b3-ef3649bbd6b1-kube-api-access-p74ch" (OuterVolumeSpecName: "kube-api-access-p74ch") pod "88f4fc76-7416-4ac2-92b3-ef3649bbd6b1" (UID: "88f4fc76-7416-4ac2-92b3-ef3649bbd6b1"). InnerVolumeSpecName "kube-api-access-p74ch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.696938 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-slv44"] Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.697473 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" podUID="4d319df9-fb42-4085-8f96-4fd671ee4ac1" containerName="dnsmasq-dns" containerID="cri-o://c9adc1a2fee911ee8f9ffeb7d5635bb997f41fe2d4cb3f440c91fc1c69005823" gracePeriod=10 Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.727312 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p74ch\" (UniqueName: \"kubernetes.io/projected/88f4fc76-7416-4ac2-92b3-ef3649bbd6b1-kube-api-access-p74ch\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.727363 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88f4fc76-7416-4ac2-92b3-ef3649bbd6b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.806873 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-k4hl8"] Jan 21 13:21:43 crc kubenswrapper[4765]: I0121 13:21:43.838620 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-k4hl8"] Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.109569 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-252fs" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.109561 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-252fs" event={"ID":"88f4fc76-7416-4ac2-92b3-ef3649bbd6b1","Type":"ContainerDied","Data":"034d4ee407153ad5355946ae7c2bb39d4fa572cb80c7822632f8a607c2de5389"} Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.110479 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="034d4ee407153ad5355946ae7c2bb39d4fa572cb80c7822632f8a607c2de5389" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.116000 4765 generic.go:334] "Generic (PLEG): container finished" podID="4d319df9-fb42-4085-8f96-4fd671ee4ac1" containerID="c9adc1a2fee911ee8f9ffeb7d5635bb997f41fe2d4cb3f440c91fc1c69005823" exitCode=0 Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.116047 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" event={"ID":"4d319df9-fb42-4085-8f96-4fd671ee4ac1","Type":"ContainerDied","Data":"c9adc1a2fee911ee8f9ffeb7d5635bb997f41fe2d4cb3f440c91fc1c69005823"} Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.116080 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" event={"ID":"4d319df9-fb42-4085-8f96-4fd671ee4ac1","Type":"ContainerDied","Data":"0dd08fee27001ec0ca3ce6a7ee38bce2d8cfc475498b4e557414d7d31e029694"} Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.116101 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dd08fee27001ec0ca3ce6a7ee38bce2d8cfc475498b4e557414d7d31e029694" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.165678 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.341793 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hjmj\" (UniqueName: \"kubernetes.io/projected/4d319df9-fb42-4085-8f96-4fd671ee4ac1-kube-api-access-9hjmj\") pod \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\" (UID: \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\") " Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.341981 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d319df9-fb42-4085-8f96-4fd671ee4ac1-ovsdbserver-nb\") pod \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\" (UID: \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\") " Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.342028 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d319df9-fb42-4085-8f96-4fd671ee4ac1-dns-svc\") pod \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\" (UID: \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\") " Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.342045 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d319df9-fb42-4085-8f96-4fd671ee4ac1-config\") pod \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\" (UID: \"4d319df9-fb42-4085-8f96-4fd671ee4ac1\") " Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.347489 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d319df9-fb42-4085-8f96-4fd671ee4ac1-kube-api-access-9hjmj" (OuterVolumeSpecName: "kube-api-access-9hjmj") pod "4d319df9-fb42-4085-8f96-4fd671ee4ac1" (UID: "4d319df9-fb42-4085-8f96-4fd671ee4ac1"). InnerVolumeSpecName "kube-api-access-9hjmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.399399 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d319df9-fb42-4085-8f96-4fd671ee4ac1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4d319df9-fb42-4085-8f96-4fd671ee4ac1" (UID: "4d319df9-fb42-4085-8f96-4fd671ee4ac1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.422833 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d319df9-fb42-4085-8f96-4fd671ee4ac1-config" (OuterVolumeSpecName: "config") pod "4d319df9-fb42-4085-8f96-4fd671ee4ac1" (UID: "4d319df9-fb42-4085-8f96-4fd671ee4ac1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.426842 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d319df9-fb42-4085-8f96-4fd671ee4ac1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4d319df9-fb42-4085-8f96-4fd671ee4ac1" (UID: "4d319df9-fb42-4085-8f96-4fd671ee4ac1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.444988 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d319df9-fb42-4085-8f96-4fd671ee4ac1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.445040 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d319df9-fb42-4085-8f96-4fd671ee4ac1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.445048 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d319df9-fb42-4085-8f96-4fd671ee4ac1-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.445057 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hjmj\" (UniqueName: \"kubernetes.io/projected/4d319df9-fb42-4085-8f96-4fd671ee4ac1-kube-api-access-9hjmj\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.445564 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.445618 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.542402 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b8a1-account-create-update-4xk2c" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.547616 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69eb1c3c-fc0b-48c1-8151-052e16dbf92e-operator-scripts\") pod \"69eb1c3c-fc0b-48c1-8151-052e16dbf92e\" (UID: \"69eb1c3c-fc0b-48c1-8151-052e16dbf92e\") " Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.547824 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flrjc\" (UniqueName: \"kubernetes.io/projected/69eb1c3c-fc0b-48c1-8151-052e16dbf92e-kube-api-access-flrjc\") pod \"69eb1c3c-fc0b-48c1-8151-052e16dbf92e\" (UID: \"69eb1c3c-fc0b-48c1-8151-052e16dbf92e\") " Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.548298 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69eb1c3c-fc0b-48c1-8151-052e16dbf92e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "69eb1c3c-fc0b-48c1-8151-052e16dbf92e" (UID: "69eb1c3c-fc0b-48c1-8151-052e16dbf92e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.551971 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69eb1c3c-fc0b-48c1-8151-052e16dbf92e-kube-api-access-flrjc" (OuterVolumeSpecName: "kube-api-access-flrjc") pod "69eb1c3c-fc0b-48c1-8151-052e16dbf92e" (UID: "69eb1c3c-fc0b-48c1-8151-052e16dbf92e"). InnerVolumeSpecName "kube-api-access-flrjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.651340 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flrjc\" (UniqueName: \"kubernetes.io/projected/69eb1c3c-fc0b-48c1-8151-052e16dbf92e-kube-api-access-flrjc\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.651395 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69eb1c3c-fc0b-48c1-8151-052e16dbf92e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.858131 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lnvld" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.868112 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c267-account-create-update-ptrxt" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.880813 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8873-account-create-update-pdsft" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.886441 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vm5v2" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.959880 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c79ed78f-7aba-4980-b043-0850084ef3e8-operator-scripts\") pod \"c79ed78f-7aba-4980-b043-0850084ef3e8\" (UID: \"c79ed78f-7aba-4980-b043-0850084ef3e8\") " Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.960243 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0d0673a-0e41-46e0-ab22-19b6e9cb522a-operator-scripts\") pod \"e0d0673a-0e41-46e0-ab22-19b6e9cb522a\" (UID: \"e0d0673a-0e41-46e0-ab22-19b6e9cb522a\") " Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.960375 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cc79a4c-772c-44ae-9b50-a3893d199b48-operator-scripts\") pod \"6cc79a4c-772c-44ae-9b50-a3893d199b48\" (UID: \"6cc79a4c-772c-44ae-9b50-a3893d199b48\") " Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.960480 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/246155f7-f2f2-4bb9-a1c3-640933aa45c6-operator-scripts\") pod \"246155f7-f2f2-4bb9-a1c3-640933aa45c6\" (UID: \"246155f7-f2f2-4bb9-a1c3-640933aa45c6\") " Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.960485 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c79ed78f-7aba-4980-b043-0850084ef3e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c79ed78f-7aba-4980-b043-0850084ef3e8" (UID: "c79ed78f-7aba-4980-b043-0850084ef3e8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.960772 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqdcf\" (UniqueName: \"kubernetes.io/projected/6cc79a4c-772c-44ae-9b50-a3893d199b48-kube-api-access-zqdcf\") pod \"6cc79a4c-772c-44ae-9b50-a3893d199b48\" (UID: \"6cc79a4c-772c-44ae-9b50-a3893d199b48\") " Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.961331 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qf97\" (UniqueName: \"kubernetes.io/projected/e0d0673a-0e41-46e0-ab22-19b6e9cb522a-kube-api-access-6qf97\") pod \"e0d0673a-0e41-46e0-ab22-19b6e9cb522a\" (UID: \"e0d0673a-0e41-46e0-ab22-19b6e9cb522a\") " Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.961902 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghj7t\" (UniqueName: \"kubernetes.io/projected/246155f7-f2f2-4bb9-a1c3-640933aa45c6-kube-api-access-ghj7t\") pod \"246155f7-f2f2-4bb9-a1c3-640933aa45c6\" (UID: \"246155f7-f2f2-4bb9-a1c3-640933aa45c6\") " Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.961997 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l24s\" (UniqueName: \"kubernetes.io/projected/c79ed78f-7aba-4980-b043-0850084ef3e8-kube-api-access-8l24s\") pod \"c79ed78f-7aba-4980-b043-0850084ef3e8\" (UID: \"c79ed78f-7aba-4980-b043-0850084ef3e8\") " Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.960876 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/246155f7-f2f2-4bb9-a1c3-640933aa45c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "246155f7-f2f2-4bb9-a1c3-640933aa45c6" (UID: "246155f7-f2f2-4bb9-a1c3-640933aa45c6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.960897 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc79a4c-772c-44ae-9b50-a3893d199b48-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6cc79a4c-772c-44ae-9b50-a3893d199b48" (UID: "6cc79a4c-772c-44ae-9b50-a3893d199b48"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.960889 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0d0673a-0e41-46e0-ab22-19b6e9cb522a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0d0673a-0e41-46e0-ab22-19b6e9cb522a" (UID: "e0d0673a-0e41-46e0-ab22-19b6e9cb522a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.962697 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c79ed78f-7aba-4980-b043-0850084ef3e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.962774 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0d0673a-0e41-46e0-ab22-19b6e9cb522a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.962830 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cc79a4c-772c-44ae-9b50-a3893d199b48-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.962892 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/246155f7-f2f2-4bb9-a1c3-640933aa45c6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.966319 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0d0673a-0e41-46e0-ab22-19b6e9cb522a-kube-api-access-6qf97" (OuterVolumeSpecName: "kube-api-access-6qf97") pod "e0d0673a-0e41-46e0-ab22-19b6e9cb522a" (UID: "e0d0673a-0e41-46e0-ab22-19b6e9cb522a"). InnerVolumeSpecName "kube-api-access-6qf97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.968861 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/246155f7-f2f2-4bb9-a1c3-640933aa45c6-kube-api-access-ghj7t" (OuterVolumeSpecName: "kube-api-access-ghj7t") pod "246155f7-f2f2-4bb9-a1c3-640933aa45c6" (UID: "246155f7-f2f2-4bb9-a1c3-640933aa45c6"). InnerVolumeSpecName "kube-api-access-ghj7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.969095 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc79a4c-772c-44ae-9b50-a3893d199b48-kube-api-access-zqdcf" (OuterVolumeSpecName: "kube-api-access-zqdcf") pod "6cc79a4c-772c-44ae-9b50-a3893d199b48" (UID: "6cc79a4c-772c-44ae-9b50-a3893d199b48"). InnerVolumeSpecName "kube-api-access-zqdcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:21:44 crc kubenswrapper[4765]: I0121 13:21:44.971831 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c79ed78f-7aba-4980-b043-0850084ef3e8-kube-api-access-8l24s" (OuterVolumeSpecName: "kube-api-access-8l24s") pod "c79ed78f-7aba-4980-b043-0850084ef3e8" (UID: "c79ed78f-7aba-4980-b043-0850084ef3e8"). InnerVolumeSpecName "kube-api-access-8l24s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.064337 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqdcf\" (UniqueName: \"kubernetes.io/projected/6cc79a4c-772c-44ae-9b50-a3893d199b48-kube-api-access-zqdcf\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.064383 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qf97\" (UniqueName: \"kubernetes.io/projected/e0d0673a-0e41-46e0-ab22-19b6e9cb522a-kube-api-access-6qf97\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.064403 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghj7t\" (UniqueName: \"kubernetes.io/projected/246155f7-f2f2-4bb9-a1c3-640933aa45c6-kube-api-access-ghj7t\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.064418 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l24s\" (UniqueName: \"kubernetes.io/projected/c79ed78f-7aba-4980-b043-0850084ef3e8-kube-api-access-8l24s\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.126373 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lnvld" event={"ID":"e0d0673a-0e41-46e0-ab22-19b6e9cb522a","Type":"ContainerDied","Data":"713219edceeae599317154d656865551bfa9749e93069b2236cbc83ee653d1fc"} Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.126416 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="713219edceeae599317154d656865551bfa9749e93069b2236cbc83ee653d1fc" Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.126480 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lnvld" Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.133799 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8873-account-create-update-pdsft" event={"ID":"c79ed78f-7aba-4980-b043-0850084ef3e8","Type":"ContainerDied","Data":"c23db034073784d2fb934098b10148453068267dd011442fd784272507726120"} Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.134924 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c23db034073784d2fb934098b10148453068267dd011442fd784272507726120" Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.133847 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8873-account-create-update-pdsft" Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.135600 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b8a1-account-create-update-4xk2c" Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.135652 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b8a1-account-create-update-4xk2c" event={"ID":"69eb1c3c-fc0b-48c1-8151-052e16dbf92e","Type":"ContainerDied","Data":"e073e72ea07635fdd8a2bc3d9518c709c606fec95042a3f4e332aedca76c43c4"} Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.135688 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e073e72ea07635fdd8a2bc3d9518c709c606fec95042a3f4e332aedca76c43c4" Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.136891 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c267-account-create-update-ptrxt" event={"ID":"6cc79a4c-772c-44ae-9b50-a3893d199b48","Type":"ContainerDied","Data":"e4a4cc35d91c3f4028cf5f618448bd7d7cf737b2d0b4a85c7e93fbdb22d4a35c"} Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.136932 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4a4cc35d91c3f4028cf5f618448bd7d7cf737b2d0b4a85c7e93fbdb22d4a35c" Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.136992 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c267-account-create-update-ptrxt" Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.138923 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vm5v2" Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.138941 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vm5v2" event={"ID":"246155f7-f2f2-4bb9-a1c3-640933aa45c6","Type":"ContainerDied","Data":"74fc7b673da22692a24ed99254e6b5cf6a4bc09492af933f6b6f02f192431e61"} Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.138986 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74fc7b673da22692a24ed99254e6b5cf6a4bc09492af933f6b6f02f192431e61" Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.138924 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-slv44" Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.196810 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-slv44"] Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.219832 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-slv44"] Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.624681 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d319df9-fb42-4085-8f96-4fd671ee4ac1" path="/var/lib/kubelet/pods/4d319df9-fb42-4085-8f96-4fd671ee4ac1/volumes" Jan 21 13:21:45 crc kubenswrapper[4765]: I0121 13:21:45.625697 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b69578e-71e3-432f-afca-edc58a98e777" path="/var/lib/kubelet/pods/9b69578e-71e3-432f-afca-edc58a98e777/volumes" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.368313 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-6h2b4"] Jan 21 13:21:46 crc kubenswrapper[4765]: E0121 13:21:46.369662 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d319df9-fb42-4085-8f96-4fd671ee4ac1" containerName="init" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.369682 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d319df9-fb42-4085-8f96-4fd671ee4ac1" containerName="init" Jan 21 13:21:46 crc kubenswrapper[4765]: E0121 13:21:46.369696 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f4fc76-7416-4ac2-92b3-ef3649bbd6b1" containerName="mariadb-database-create" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.369701 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f4fc76-7416-4ac2-92b3-ef3649bbd6b1" containerName="mariadb-database-create" Jan 21 13:21:46 crc kubenswrapper[4765]: E0121 13:21:46.369719 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d319df9-fb42-4085-8f96-4fd671ee4ac1" containerName="dnsmasq-dns" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.369726 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d319df9-fb42-4085-8f96-4fd671ee4ac1" containerName="dnsmasq-dns" Jan 21 13:21:46 crc kubenswrapper[4765]: E0121 13:21:46.369740 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b69578e-71e3-432f-afca-edc58a98e777" containerName="mariadb-account-create-update" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.369746 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b69578e-71e3-432f-afca-edc58a98e777" containerName="mariadb-account-create-update" Jan 21 13:21:46 crc kubenswrapper[4765]: E0121 13:21:46.369764 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="246155f7-f2f2-4bb9-a1c3-640933aa45c6" containerName="mariadb-database-create" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.369769 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="246155f7-f2f2-4bb9-a1c3-640933aa45c6" containerName="mariadb-database-create" Jan 21 13:21:46 crc kubenswrapper[4765]: E0121 13:21:46.369780 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79ed78f-7aba-4980-b043-0850084ef3e8" containerName="mariadb-account-create-update" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.369785 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79ed78f-7aba-4980-b043-0850084ef3e8" containerName="mariadb-account-create-update" Jan 21 13:21:46 crc kubenswrapper[4765]: E0121 13:21:46.369793 
4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d0673a-0e41-46e0-ab22-19b6e9cb522a" containerName="mariadb-database-create" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.369799 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d0673a-0e41-46e0-ab22-19b6e9cb522a" containerName="mariadb-database-create" Jan 21 13:21:46 crc kubenswrapper[4765]: E0121 13:21:46.369807 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69eb1c3c-fc0b-48c1-8151-052e16dbf92e" containerName="mariadb-account-create-update" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.369813 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="69eb1c3c-fc0b-48c1-8151-052e16dbf92e" containerName="mariadb-account-create-update" Jan 21 13:21:46 crc kubenswrapper[4765]: E0121 13:21:46.369825 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc79a4c-772c-44ae-9b50-a3893d199b48" containerName="mariadb-account-create-update" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.369832 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc79a4c-772c-44ae-9b50-a3893d199b48" containerName="mariadb-account-create-update" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.369991 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f4fc76-7416-4ac2-92b3-ef3649bbd6b1" containerName="mariadb-database-create" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.370012 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d0673a-0e41-46e0-ab22-19b6e9cb522a" containerName="mariadb-database-create" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.370021 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d319df9-fb42-4085-8f96-4fd671ee4ac1" containerName="dnsmasq-dns" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.370030 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b69578e-71e3-432f-afca-edc58a98e777" containerName="mariadb-account-create-update" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.370042 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="246155f7-f2f2-4bb9-a1c3-640933aa45c6" containerName="mariadb-database-create" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.370053 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="69eb1c3c-fc0b-48c1-8151-052e16dbf92e" containerName="mariadb-account-create-update" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.370063 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc79a4c-772c-44ae-9b50-a3893d199b48" containerName="mariadb-account-create-update" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.370071 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79ed78f-7aba-4980-b043-0850084ef3e8" containerName="mariadb-account-create-update" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.370784 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6h2b4" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.377893 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.378150 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8hh29" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.386942 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a300493b-663b-4b7e-b2b7-890abcca42dd-config-data\") pod \"glance-db-sync-6h2b4\" (UID: \"a300493b-663b-4b7e-b2b7-890abcca42dd\") " pod="openstack/glance-db-sync-6h2b4" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.387079 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn8k4\" (UniqueName: \"kubernetes.io/projected/a300493b-663b-4b7e-b2b7-890abcca42dd-kube-api-access-vn8k4\") pod \"glance-db-sync-6h2b4\" (UID: \"a300493b-663b-4b7e-b2b7-890abcca42dd\") " pod="openstack/glance-db-sync-6h2b4" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.387166 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a300493b-663b-4b7e-b2b7-890abcca42dd-db-sync-config-data\") pod \"glance-db-sync-6h2b4\" (UID: \"a300493b-663b-4b7e-b2b7-890abcca42dd\") " pod="openstack/glance-db-sync-6h2b4" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.387287 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a300493b-663b-4b7e-b2b7-890abcca42dd-combined-ca-bundle\") pod \"glance-db-sync-6h2b4\" (UID: \"a300493b-663b-4b7e-b2b7-890abcca42dd\") " pod="openstack/glance-db-sync-6h2b4" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.402398 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6h2b4"] Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.488632 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a300493b-663b-4b7e-b2b7-890abcca42dd-db-sync-config-data\") pod \"glance-db-sync-6h2b4\" (UID: \"a300493b-663b-4b7e-b2b7-890abcca42dd\") " pod="openstack/glance-db-sync-6h2b4" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.488722 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a300493b-663b-4b7e-b2b7-890abcca42dd-combined-ca-bundle\") pod \"glance-db-sync-6h2b4\" (UID: \"a300493b-663b-4b7e-b2b7-890abcca42dd\") " pod="openstack/glance-db-sync-6h2b4" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.488778 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a300493b-663b-4b7e-b2b7-890abcca42dd-config-data\") pod \"glance-db-sync-6h2b4\" (UID: \"a300493b-663b-4b7e-b2b7-890abcca42dd\") " pod="openstack/glance-db-sync-6h2b4" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.488849 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn8k4\" (UniqueName: \"kubernetes.io/projected/a300493b-663b-4b7e-b2b7-890abcca42dd-kube-api-access-vn8k4\") pod 
\"glance-db-sync-6h2b4\" (UID: \"a300493b-663b-4b7e-b2b7-890abcca42dd\") " pod="openstack/glance-db-sync-6h2b4" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.494455 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a300493b-663b-4b7e-b2b7-890abcca42dd-config-data\") pod \"glance-db-sync-6h2b4\" (UID: \"a300493b-663b-4b7e-b2b7-890abcca42dd\") " pod="openstack/glance-db-sync-6h2b4" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.501157 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a300493b-663b-4b7e-b2b7-890abcca42dd-combined-ca-bundle\") pod \"glance-db-sync-6h2b4\" (UID: \"a300493b-663b-4b7e-b2b7-890abcca42dd\") " pod="openstack/glance-db-sync-6h2b4" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.503605 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a300493b-663b-4b7e-b2b7-890abcca42dd-db-sync-config-data\") pod \"glance-db-sync-6h2b4\" (UID: \"a300493b-663b-4b7e-b2b7-890abcca42dd\") " pod="openstack/glance-db-sync-6h2b4" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.505751 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn8k4\" (UniqueName: \"kubernetes.io/projected/a300493b-663b-4b7e-b2b7-890abcca42dd-kube-api-access-vn8k4\") pod \"glance-db-sync-6h2b4\" (UID: \"a300493b-663b-4b7e-b2b7-890abcca42dd\") " pod="openstack/glance-db-sync-6h2b4" Jan 21 13:21:46 crc kubenswrapper[4765]: I0121 13:21:46.694713 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6h2b4" Jan 21 13:21:47 crc kubenswrapper[4765]: I0121 13:21:47.248018 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6h2b4"] Jan 21 13:21:47 crc kubenswrapper[4765]: I0121 13:21:47.540160 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-r4f5t"] Jan 21 13:21:47 crc kubenswrapper[4765]: I0121 13:21:47.541597 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-r4f5t" Jan 21 13:21:47 crc kubenswrapper[4765]: I0121 13:21:47.544038 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 13:21:47 crc kubenswrapper[4765]: I0121 13:21:47.559022 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-r4f5t"] Jan 21 13:21:47 crc kubenswrapper[4765]: I0121 13:21:47.614157 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b8c79c-6093-496c-86e2-fe9dafebe84a-operator-scripts\") pod \"root-account-create-update-r4f5t\" (UID: \"b7b8c79c-6093-496c-86e2-fe9dafebe84a\") " pod="openstack/root-account-create-update-r4f5t" Jan 21 13:21:47 crc kubenswrapper[4765]: I0121 13:21:47.614263 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmm7z\" (UniqueName: \"kubernetes.io/projected/b7b8c79c-6093-496c-86e2-fe9dafebe84a-kube-api-access-jmm7z\") pod \"root-account-create-update-r4f5t\" (UID: \"b7b8c79c-6093-496c-86e2-fe9dafebe84a\") " pod="openstack/root-account-create-update-r4f5t" Jan 21 13:21:47 crc kubenswrapper[4765]: I0121 13:21:47.715795 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmm7z\" (UniqueName: \"kubernetes.io/projected/b7b8c79c-6093-496c-86e2-fe9dafebe84a-kube-api-access-jmm7z\") pod \"root-account-create-update-r4f5t\" (UID: \"b7b8c79c-6093-496c-86e2-fe9dafebe84a\") " pod="openstack/root-account-create-update-r4f5t" Jan 21 13:21:47 crc kubenswrapper[4765]: I0121 13:21:47.717265 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b8c79c-6093-496c-86e2-fe9dafebe84a-operator-scripts\") pod \"root-account-create-update-r4f5t\" (UID: \"b7b8c79c-6093-496c-86e2-fe9dafebe84a\") " pod="openstack/root-account-create-update-r4f5t" Jan 21 13:21:47 crc kubenswrapper[4765]: I0121 13:21:47.718795 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b8c79c-6093-496c-86e2-fe9dafebe84a-operator-scripts\") pod \"root-account-create-update-r4f5t\" (UID: \"b7b8c79c-6093-496c-86e2-fe9dafebe84a\") " pod="openstack/root-account-create-update-r4f5t" Jan 21 13:21:47 crc kubenswrapper[4765]: I0121 13:21:47.745580 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmm7z\" (UniqueName: \"kubernetes.io/projected/b7b8c79c-6093-496c-86e2-fe9dafebe84a-kube-api-access-jmm7z\") pod \"root-account-create-update-r4f5t\" (UID: \"b7b8c79c-6093-496c-86e2-fe9dafebe84a\") " pod="openstack/root-account-create-update-r4f5t" Jan 21 13:21:47 crc kubenswrapper[4765]: I0121 13:21:47.863236 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-r4f5t" Jan 21 13:21:48 crc kubenswrapper[4765]: I0121 13:21:48.165780 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6h2b4" event={"ID":"a300493b-663b-4b7e-b2b7-890abcca42dd","Type":"ContainerStarted","Data":"d009b9f894aa36c4e00347b4d3b13245fc4dd5885d1f604e727ee933703c0189"} Jan 21 13:21:48 crc kubenswrapper[4765]: I0121 13:21:48.362489 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-r4f5t"] Jan 21 13:21:49 crc kubenswrapper[4765]: I0121 13:21:49.175900 4765 generic.go:334] "Generic (PLEG): container finished" podID="60abe159-7e5d-4586-9d1b-0050de42edbe" containerID="0ce2d7fdf18b9a1854c96293b2d18c16945726e72e36f640673be66d4a0c9797" exitCode=0 Jan 21 13:21:49 crc kubenswrapper[4765]: I0121 13:21:49.175967 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j5v45" event={"ID":"60abe159-7e5d-4586-9d1b-0050de42edbe","Type":"ContainerDied","Data":"0ce2d7fdf18b9a1854c96293b2d18c16945726e72e36f640673be66d4a0c9797"} Jan 21 13:21:49 crc kubenswrapper[4765]: I0121 13:21:49.179699 4765 generic.go:334] "Generic (PLEG): container finished" podID="b7b8c79c-6093-496c-86e2-fe9dafebe84a" containerID="4d6767471925da961e268db4e379427ead8911869ddb04bf8fbf5ba5b3a25524" exitCode=0 Jan 21 13:21:49 crc kubenswrapper[4765]: I0121 13:21:49.179750 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r4f5t" event={"ID":"b7b8c79c-6093-496c-86e2-fe9dafebe84a","Type":"ContainerDied","Data":"4d6767471925da961e268db4e379427ead8911869ddb04bf8fbf5ba5b3a25524"} Jan 21 13:21:49 crc kubenswrapper[4765]: I0121 13:21:49.179777 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r4f5t" event={"ID":"b7b8c79c-6093-496c-86e2-fe9dafebe84a","Type":"ContainerStarted","Data":"168b6fa7e541474449c8c640ecb7db8d657c63d801025a8324a188618cda3d5c"} Jan 21 13:21:49 crc kubenswrapper[4765]: I0121 13:21:49.344016 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.073901 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-etc-swift\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.087084 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/89b81f15-19f3-4dab-9b2d-fa41b2eab844-etc-swift\") pod \"swift-storage-0\" (UID: \"89b81f15-19f3-4dab-9b2d-fa41b2eab844\") " pod="openstack/swift-storage-0" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.141330 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.668577 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.684388 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60abe159-7e5d-4586-9d1b-0050de42edbe-swiftconf\") pod \"60abe159-7e5d-4586-9d1b-0050de42edbe\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.684467 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwllw\" (UniqueName: \"kubernetes.io/projected/60abe159-7e5d-4586-9d1b-0050de42edbe-kube-api-access-gwllw\") pod \"60abe159-7e5d-4586-9d1b-0050de42edbe\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.684484 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60abe159-7e5d-4586-9d1b-0050de42edbe-combined-ca-bundle\") pod \"60abe159-7e5d-4586-9d1b-0050de42edbe\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.684546 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/60abe159-7e5d-4586-9d1b-0050de42edbe-ring-data-devices\") pod \"60abe159-7e5d-4586-9d1b-0050de42edbe\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.684583 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60abe159-7e5d-4586-9d1b-0050de42edbe-dispersionconf\") pod \"60abe159-7e5d-4586-9d1b-0050de42edbe\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.684628 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/60abe159-7e5d-4586-9d1b-0050de42edbe-etc-swift\") pod \"60abe159-7e5d-4586-9d1b-0050de42edbe\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.684679 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60abe159-7e5d-4586-9d1b-0050de42edbe-scripts\") pod \"60abe159-7e5d-4586-9d1b-0050de42edbe\" (UID: \"60abe159-7e5d-4586-9d1b-0050de42edbe\") " Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.685651 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60abe159-7e5d-4586-9d1b-0050de42edbe-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "60abe159-7e5d-4586-9d1b-0050de42edbe" (UID: "60abe159-7e5d-4586-9d1b-0050de42edbe"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.687257 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60abe159-7e5d-4586-9d1b-0050de42edbe-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "60abe159-7e5d-4586-9d1b-0050de42edbe" (UID: "60abe159-7e5d-4586-9d1b-0050de42edbe"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.711009 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-r4f5t" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.719012 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60abe159-7e5d-4586-9d1b-0050de42edbe-kube-api-access-gwllw" (OuterVolumeSpecName: "kube-api-access-gwllw") pod "60abe159-7e5d-4586-9d1b-0050de42edbe" (UID: "60abe159-7e5d-4586-9d1b-0050de42edbe"). InnerVolumeSpecName "kube-api-access-gwllw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.723311 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60abe159-7e5d-4586-9d1b-0050de42edbe-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "60abe159-7e5d-4586-9d1b-0050de42edbe" (UID: "60abe159-7e5d-4586-9d1b-0050de42edbe"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.725352 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60abe159-7e5d-4586-9d1b-0050de42edbe-scripts" (OuterVolumeSpecName: "scripts") pod "60abe159-7e5d-4586-9d1b-0050de42edbe" (UID: "60abe159-7e5d-4586-9d1b-0050de42edbe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.741938 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60abe159-7e5d-4586-9d1b-0050de42edbe-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "60abe159-7e5d-4586-9d1b-0050de42edbe" (UID: "60abe159-7e5d-4586-9d1b-0050de42edbe"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.757478 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60abe159-7e5d-4586-9d1b-0050de42edbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60abe159-7e5d-4586-9d1b-0050de42edbe" (UID: "60abe159-7e5d-4586-9d1b-0050de42edbe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.790016 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmm7z\" (UniqueName: \"kubernetes.io/projected/b7b8c79c-6093-496c-86e2-fe9dafebe84a-kube-api-access-jmm7z\") pod \"b7b8c79c-6093-496c-86e2-fe9dafebe84a\" (UID: \"b7b8c79c-6093-496c-86e2-fe9dafebe84a\") " Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.790243 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b8c79c-6093-496c-86e2-fe9dafebe84a-operator-scripts\") pod \"b7b8c79c-6093-496c-86e2-fe9dafebe84a\" (UID: \"b7b8c79c-6093-496c-86e2-fe9dafebe84a\") " Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.790582 4765 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/60abe159-7e5d-4586-9d1b-0050de42edbe-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.790600 4765 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60abe159-7e5d-4586-9d1b-0050de42edbe-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.790614 4765 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/60abe159-7e5d-4586-9d1b-0050de42edbe-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.790631 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60abe159-7e5d-4586-9d1b-0050de42edbe-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.790644 4765 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60abe159-7e5d-4586-9d1b-0050de42edbe-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.790659 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwllw\" (UniqueName: \"kubernetes.io/projected/60abe159-7e5d-4586-9d1b-0050de42edbe-kube-api-access-gwllw\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.790673 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60abe159-7e5d-4586-9d1b-0050de42edbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.790972 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7b8c79c-6093-496c-86e2-fe9dafebe84a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7b8c79c-6093-496c-86e2-fe9dafebe84a" (UID: "b7b8c79c-6093-496c-86e2-fe9dafebe84a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.794563 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b8c79c-6093-496c-86e2-fe9dafebe84a-kube-api-access-jmm7z" (OuterVolumeSpecName: "kube-api-access-jmm7z") pod "b7b8c79c-6093-496c-86e2-fe9dafebe84a" (UID: "b7b8c79c-6093-496c-86e2-fe9dafebe84a"). InnerVolumeSpecName "kube-api-access-jmm7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.892518 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmm7z\" (UniqueName: \"kubernetes.io/projected/b7b8c79c-6093-496c-86e2-fe9dafebe84a-kube-api-access-jmm7z\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:50 crc kubenswrapper[4765]: I0121 13:21:50.892555 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b8c79c-6093-496c-86e2-fe9dafebe84a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:51 crc kubenswrapper[4765]: I0121 13:21:51.196660 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j5v45" Jan 21 13:21:51 crc kubenswrapper[4765]: I0121 13:21:51.196825 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j5v45" event={"ID":"60abe159-7e5d-4586-9d1b-0050de42edbe","Type":"ContainerDied","Data":"f63f5d984044dcb02f8af93f0c021fb28cda9ddf5a74adb056ffe1051fc5499c"} Jan 21 13:21:51 crc kubenswrapper[4765]: I0121 13:21:51.197180 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f63f5d984044dcb02f8af93f0c021fb28cda9ddf5a74adb056ffe1051fc5499c" Jan 21 13:21:51 crc kubenswrapper[4765]: I0121 13:21:51.199302 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 13:21:51 crc kubenswrapper[4765]: I0121 13:21:51.201886 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r4f5t" event={"ID":"b7b8c79c-6093-496c-86e2-fe9dafebe84a","Type":"ContainerDied","Data":"168b6fa7e541474449c8c640ecb7db8d657c63d801025a8324a188618cda3d5c"} Jan 21 13:21:51 crc kubenswrapper[4765]: I0121 13:21:51.201927 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="168b6fa7e541474449c8c640ecb7db8d657c63d801025a8324a188618cda3d5c" Jan 21 13:21:51 crc kubenswrapper[4765]: I0121 13:21:51.201950 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-r4f5t" Jan 21 13:21:52 crc kubenswrapper[4765]: I0121 13:21:52.213693 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89b81f15-19f3-4dab-9b2d-fa41b2eab844","Type":"ContainerStarted","Data":"c79fd6f73137a0950d185e1849256a4c993e2405835f7d6c56be063a5d35935d"} Jan 21 13:21:53 crc kubenswrapper[4765]: I0121 13:21:53.234961 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89b81f15-19f3-4dab-9b2d-fa41b2eab844","Type":"ContainerStarted","Data":"f1c29147c2b9e3995b377352ebb06648e1a0a4cab5a3f63f578f22ca8a9bbdae"} Jan 21 13:21:53 crc kubenswrapper[4765]: I0121 13:21:53.235363 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89b81f15-19f3-4dab-9b2d-fa41b2eab844","Type":"ContainerStarted","Data":"66368e0542eebceca3368f7fb74e2bf2433fc9bebf94e4faeeb9327d60ba8bd4"} Jan 21 13:21:53 crc kubenswrapper[4765]: I0121 13:21:53.235378 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89b81f15-19f3-4dab-9b2d-fa41b2eab844","Type":"ContainerStarted","Data":"b786dd1dd75e71548b4390021f6ec35d9a10565c7f480db28ba494c8b9d21fe8"} Jan 21 13:21:53 crc kubenswrapper[4765]: I0121 13:21:53.821656 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-r4f5t"] Jan 21 13:21:53 crc kubenswrapper[4765]: I0121 13:21:53.831873 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-r4f5t"] Jan 21 13:21:54 crc kubenswrapper[4765]: I0121 13:21:54.252138 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89b81f15-19f3-4dab-9b2d-fa41b2eab844","Type":"ContainerStarted","Data":"8a8e6489ad9801d7bb0a02928e774d16365728ebc8411ca4be8b39a556ddfc4e"} Jan 21 13:21:55 crc kubenswrapper[4765]: I0121 13:21:55.629759 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7b8c79c-6093-496c-86e2-fe9dafebe84a" path="/var/lib/kubelet/pods/b7b8c79c-6093-496c-86e2-fe9dafebe84a/volumes" Jan 21 13:21:55 crc kubenswrapper[4765]: I0121 13:21:55.917996 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gkqpl" podUID="acf0ca9c-abda-4c3b-98d3-ca3e6189434a" containerName="ovn-controller" probeResult="failure" output=< Jan 21 13:21:55 crc kubenswrapper[4765]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 13:21:55 crc kubenswrapper[4765]: > Jan 21 13:21:58 crc kubenswrapper[4765]: I0121 13:21:58.849643 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9nwh5"] Jan 21 13:21:58 crc kubenswrapper[4765]: E0121 13:21:58.850741 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60abe159-7e5d-4586-9d1b-0050de42edbe" containerName="swift-ring-rebalance" Jan 21 13:21:58 crc kubenswrapper[4765]: I0121 13:21:58.850758 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="60abe159-7e5d-4586-9d1b-0050de42edbe" containerName="swift-ring-rebalance" Jan 21 13:21:58 crc kubenswrapper[4765]: E0121 13:21:58.850776 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b8c79c-6093-496c-86e2-fe9dafebe84a" containerName="mariadb-account-create-update" Jan 21 13:21:58 crc kubenswrapper[4765]: I0121 13:21:58.850784 4765 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b7b8c79c-6093-496c-86e2-fe9dafebe84a" containerName="mariadb-account-create-update" Jan 21 13:21:58 crc kubenswrapper[4765]: I0121 13:21:58.850992 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7b8c79c-6093-496c-86e2-fe9dafebe84a" containerName="mariadb-account-create-update" Jan 21 13:21:58 crc kubenswrapper[4765]: I0121 13:21:58.851016 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="60abe159-7e5d-4586-9d1b-0050de42edbe" containerName="swift-ring-rebalance" Jan 21 13:21:58 crc kubenswrapper[4765]: I0121 13:21:58.853303 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9nwh5" Jan 21 13:21:58 crc kubenswrapper[4765]: I0121 13:21:58.859731 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9nwh5"] Jan 21 13:21:58 crc kubenswrapper[4765]: I0121 13:21:58.859817 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 13:21:58 crc kubenswrapper[4765]: I0121 13:21:58.950803 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psj72\" (UniqueName: \"kubernetes.io/projected/89ea0c05-6e45-48ca-a687-79c9e4cbc084-kube-api-access-psj72\") pod \"root-account-create-update-9nwh5\" (UID: \"89ea0c05-6e45-48ca-a687-79c9e4cbc084\") " pod="openstack/root-account-create-update-9nwh5" Jan 21 13:21:58 crc kubenswrapper[4765]: I0121 13:21:58.950972 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ea0c05-6e45-48ca-a687-79c9e4cbc084-operator-scripts\") pod \"root-account-create-update-9nwh5\" (UID: \"89ea0c05-6e45-48ca-a687-79c9e4cbc084\") " pod="openstack/root-account-create-update-9nwh5" Jan 21 13:21:59 crc kubenswrapper[4765]: I0121 13:21:59.052928 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ea0c05-6e45-48ca-a687-79c9e4cbc084-operator-scripts\") pod \"root-account-create-update-9nwh5\" (UID: \"89ea0c05-6e45-48ca-a687-79c9e4cbc084\") " pod="openstack/root-account-create-update-9nwh5" Jan 21 13:21:59 crc kubenswrapper[4765]: I0121 13:21:59.053024 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psj72\" (UniqueName: \"kubernetes.io/projected/89ea0c05-6e45-48ca-a687-79c9e4cbc084-kube-api-access-psj72\") pod \"root-account-create-update-9nwh5\" (UID: \"89ea0c05-6e45-48ca-a687-79c9e4cbc084\") " pod="openstack/root-account-create-update-9nwh5" Jan 21 13:21:59 crc kubenswrapper[4765]: I0121 13:21:59.054286 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ea0c05-6e45-48ca-a687-79c9e4cbc084-operator-scripts\") pod \"root-account-create-update-9nwh5\" (UID: \"89ea0c05-6e45-48ca-a687-79c9e4cbc084\") " pod="openstack/root-account-create-update-9nwh5" Jan 21 13:21:59 crc kubenswrapper[4765]: I0121 13:21:59.072705 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psj72\" (UniqueName: \"kubernetes.io/projected/89ea0c05-6e45-48ca-a687-79c9e4cbc084-kube-api-access-psj72\") pod \"root-account-create-update-9nwh5\" (UID: \"89ea0c05-6e45-48ca-a687-79c9e4cbc084\") " pod="openstack/root-account-create-update-9nwh5" Jan 21 13:21:59 crc kubenswrapper[4765]: 
I0121 13:21:59.185988 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9nwh5" Jan 21 13:21:59 crc kubenswrapper[4765]: I0121 13:21:59.320356 4765 generic.go:334] "Generic (PLEG): container finished" podID="054275fd-f5b9-4326-98a3-af2cc1d76c17" containerID="7dcc51364c36973f1ebc49e3c990ab016165b1bb8ac45a8169fac12e8e7360f4" exitCode=0 Jan 21 13:21:59 crc kubenswrapper[4765]: I0121 13:21:59.320705 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"054275fd-f5b9-4326-98a3-af2cc1d76c17","Type":"ContainerDied","Data":"7dcc51364c36973f1ebc49e3c990ab016165b1bb8ac45a8169fac12e8e7360f4"} Jan 21 13:21:59 crc kubenswrapper[4765]: I0121 13:21:59.329828 4765 generic.go:334] "Generic (PLEG): container finished" podID="4d783178-0ea7-4643-802f-d56722e1df7d" containerID="4616ef97539fc8112f0373c108ede44e8bc6f6f97bc36b1ff01a83991a083f75" exitCode=0 Jan 21 13:21:59 crc kubenswrapper[4765]: I0121 13:21:59.329880 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4d783178-0ea7-4643-802f-d56722e1df7d","Type":"ContainerDied","Data":"4616ef97539fc8112f0373c108ede44e8bc6f6f97bc36b1ff01a83991a083f75"} Jan 21 13:22:00 crc kubenswrapper[4765]: I0121 13:22:00.945195 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gkqpl" podUID="acf0ca9c-abda-4c3b-98d3-ca3e6189434a" containerName="ovn-controller" probeResult="failure" output=< Jan 21 13:22:00 crc kubenswrapper[4765]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 13:22:00 crc kubenswrapper[4765]: > Jan 21 13:22:00 crc kubenswrapper[4765]: I0121 13:22:00.961620 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.026874 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-64shj" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.138010 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9nwh5"] Jan 21 13:22:01 crc kubenswrapper[4765]: W0121 13:22:01.147833 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89ea0c05_6e45_48ca_a687_79c9e4cbc084.slice/crio-a8bc5b5114c36e4f4d29801d93cb7ace478b0d379258b9f831f3369460a685e7 WatchSource:0}: Error finding container a8bc5b5114c36e4f4d29801d93cb7ace478b0d379258b9f831f3369460a685e7: Status 404 returned error can't find the container with id a8bc5b5114c36e4f4d29801d93cb7ace478b0d379258b9f831f3369460a685e7 Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.273600 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gkqpl-config-mktqh"] Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.274960 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.281483 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.302707 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f0b20d84-fd95-4258-aac7-eed7e5ee5128-var-run-ovn\") pod \"ovn-controller-gkqpl-config-mktqh\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.302750 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f0b20d84-fd95-4258-aac7-eed7e5ee5128-var-run\") pod \"ovn-controller-gkqpl-config-mktqh\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.302818 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f0b20d84-fd95-4258-aac7-eed7e5ee5128-additional-scripts\") pod \"ovn-controller-gkqpl-config-mktqh\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.302887 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcjhh\" (UniqueName: \"kubernetes.io/projected/f0b20d84-fd95-4258-aac7-eed7e5ee5128-kube-api-access-mcjhh\") pod \"ovn-controller-gkqpl-config-mktqh\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.302914 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f0b20d84-fd95-4258-aac7-eed7e5ee5128-var-log-ovn\") pod \"ovn-controller-gkqpl-config-mktqh\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.302950 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0b20d84-fd95-4258-aac7-eed7e5ee5128-scripts\") pod \"ovn-controller-gkqpl-config-mktqh\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.323736 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gkqpl-config-mktqh"] Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.366151 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4d783178-0ea7-4643-802f-d56722e1df7d","Type":"ContainerStarted","Data":"85748d994c8b907b866b52a387ecb62d3fb2d52f35909390b09cc0acf091d06e"} Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.367502 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.377146 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/root-account-create-update-9nwh5" event={"ID":"89ea0c05-6e45-48ca-a687-79c9e4cbc084","Type":"ContainerStarted","Data":"a8bc5b5114c36e4f4d29801d93cb7ace478b0d379258b9f831f3369460a685e7"} Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.381286 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"054275fd-f5b9-4326-98a3-af2cc1d76c17","Type":"ContainerStarted","Data":"86eb1244c7d3b1abc5524f76b3df354eda942ce6e12f45e000ae681bccd46da4"} Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.381985 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.404647 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f0b20d84-fd95-4258-aac7-eed7e5ee5128-var-run-ovn\") pod \"ovn-controller-gkqpl-config-mktqh\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.404696 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f0b20d84-fd95-4258-aac7-eed7e5ee5128-var-run\") pod \"ovn-controller-gkqpl-config-mktqh\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.404766 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f0b20d84-fd95-4258-aac7-eed7e5ee5128-additional-scripts\") pod \"ovn-controller-gkqpl-config-mktqh\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.404788 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcjhh\" (UniqueName: \"kubernetes.io/projected/f0b20d84-fd95-4258-aac7-eed7e5ee5128-kube-api-access-mcjhh\") pod \"ovn-controller-gkqpl-config-mktqh\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.404812 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f0b20d84-fd95-4258-aac7-eed7e5ee5128-var-log-ovn\") pod \"ovn-controller-gkqpl-config-mktqh\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.404847 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0b20d84-fd95-4258-aac7-eed7e5ee5128-scripts\") pod \"ovn-controller-gkqpl-config-mktqh\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.405164 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f0b20d84-fd95-4258-aac7-eed7e5ee5128-var-run\") pod \"ovn-controller-gkqpl-config-mktqh\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.405260 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f0b20d84-fd95-4258-aac7-eed7e5ee5128-var-log-ovn\") pod \"ovn-controller-gkqpl-config-mktqh\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.405309 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f0b20d84-fd95-4258-aac7-eed7e5ee5128-var-run-ovn\") pod \"ovn-controller-gkqpl-config-mktqh\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.405813 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f0b20d84-fd95-4258-aac7-eed7e5ee5128-additional-scripts\") pod \"ovn-controller-gkqpl-config-mktqh\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.409909 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0b20d84-fd95-4258-aac7-eed7e5ee5128-scripts\") pod \"ovn-controller-gkqpl-config-mktqh\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.436530 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.790610558 podStartE2EDuration="1m25.436514071s" podCreationTimestamp="2026-01-21 13:20:36 +0000 UTC" firstStartedPulling="2026-01-21 13:20:38.252053764 +0000 UTC m=+1099.269779586" lastFinishedPulling="2026-01-21 13:21:24.897957277 +0000 UTC m=+1145.915683099" observedRunningTime="2026-01-21 13:22:01.434031799 +0000 UTC m=+1182.451757621" watchObservedRunningTime="2026-01-21 13:22:01.436514071 +0000 UTC m=+1182.454239893" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.439639 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.217362492 podStartE2EDuration="1m25.439620263s" podCreationTimestamp="2026-01-21 13:20:36 +0000 UTC" firstStartedPulling="2026-01-21 13:20:38.687291137 +0000 UTC m=+1099.705016959" lastFinishedPulling="2026-01-21 13:21:24.909548908 +0000 UTC m=+1145.927274730" observedRunningTime="2026-01-21 13:22:01.400186094 +0000 UTC m=+1182.417911916" watchObservedRunningTime="2026-01-21 13:22:01.439620263 +0000 UTC m=+1182.457346085" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.449409 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcjhh\" (UniqueName: \"kubernetes.io/projected/f0b20d84-fd95-4258-aac7-eed7e5ee5128-kube-api-access-mcjhh\") pod \"ovn-controller-gkqpl-config-mktqh\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:01 crc kubenswrapper[4765]: I0121 13:22:01.645489 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:02 crc kubenswrapper[4765]: I0121 13:22:02.332172 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gkqpl-config-mktqh"] Jan 21 13:22:02 crc kubenswrapper[4765]: W0121 13:22:02.333315 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0b20d84_fd95_4258_aac7_eed7e5ee5128.slice/crio-c305cab8b10e62c6e90fde44753c4409ac9cc27f592917b4a311a31ccb7e82aa WatchSource:0}: Error finding container c305cab8b10e62c6e90fde44753c4409ac9cc27f592917b4a311a31ccb7e82aa: Status 404 returned error can't find the container with id c305cab8b10e62c6e90fde44753c4409ac9cc27f592917b4a311a31ccb7e82aa Jan 21 13:22:02 crc kubenswrapper[4765]: I0121 13:22:02.417511 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gkqpl-config-mktqh" event={"ID":"f0b20d84-fd95-4258-aac7-eed7e5ee5128","Type":"ContainerStarted","Data":"c305cab8b10e62c6e90fde44753c4409ac9cc27f592917b4a311a31ccb7e82aa"} Jan 21 13:22:02 crc kubenswrapper[4765]: I0121 13:22:02.425028 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89b81f15-19f3-4dab-9b2d-fa41b2eab844","Type":"ContainerStarted","Data":"88bbb4ea7c3001acc4197609739a9fb25853196ee53986dc365315a74869fcc8"} Jan 21 13:22:02 crc kubenswrapper[4765]: I0121 13:22:02.425313 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89b81f15-19f3-4dab-9b2d-fa41b2eab844","Type":"ContainerStarted","Data":"4d64203bfc757c6ae1fc65b8c9449a289ce06018b4647c110ed185eb25eba687"} Jan 21 13:22:02 crc kubenswrapper[4765]: I0121 13:22:02.425405 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89b81f15-19f3-4dab-9b2d-fa41b2eab844","Type":"ContainerStarted","Data":"f516657fb9e1521f9992bb2c40b22a04362df22de51af00117d4d13e51c823cd"} Jan 21 13:22:02 crc kubenswrapper[4765]: I0121 13:22:02.426619 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6h2b4" event={"ID":"a300493b-663b-4b7e-b2b7-890abcca42dd","Type":"ContainerStarted","Data":"9fa7c6b73f21e838816589ad4c9d85a7805eea241a59ca34be4aa103ee7feafd"} Jan 21 13:22:02 crc kubenswrapper[4765]: I0121 13:22:02.434348 4765 generic.go:334] "Generic (PLEG): container finished" podID="89ea0c05-6e45-48ca-a687-79c9e4cbc084" containerID="eac1f8c5bce8f14d00e35df158711d3bff75eaee987c811b4b57febe1072b525" exitCode=0 Jan 21 13:22:02 crc kubenswrapper[4765]: I0121 13:22:02.434722 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9nwh5" event={"ID":"89ea0c05-6e45-48ca-a687-79c9e4cbc084","Type":"ContainerDied","Data":"eac1f8c5bce8f14d00e35df158711d3bff75eaee987c811b4b57febe1072b525"} Jan 21 13:22:02 crc kubenswrapper[4765]: I0121 13:22:02.487064 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-6h2b4" podStartSLOduration=2.959948887 podStartE2EDuration="16.487039956s" podCreationTimestamp="2026-01-21 13:21:46 +0000 UTC" firstStartedPulling="2026-01-21 13:21:47.256945748 +0000 UTC m=+1168.274671560" lastFinishedPulling="2026-01-21 13:22:00.784036807 +0000 UTC m=+1181.801762629" observedRunningTime="2026-01-21 13:22:02.470627114 +0000 UTC m=+1183.488352936" watchObservedRunningTime="2026-01-21 13:22:02.487039956 +0000 UTC m=+1183.504765778" Jan 21 13:22:03 crc kubenswrapper[4765]: 
I0121 13:22:03.445805 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89b81f15-19f3-4dab-9b2d-fa41b2eab844","Type":"ContainerStarted","Data":"884aaa79647132221ee7ad162fbc3841f01f5f169f5b78faca20daccbca9637c"} Jan 21 13:22:03 crc kubenswrapper[4765]: I0121 13:22:03.450582 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gkqpl-config-mktqh" event={"ID":"f0b20d84-fd95-4258-aac7-eed7e5ee5128","Type":"ContainerStarted","Data":"c342cbc167565b0b099a201b8cb67b39137ac2bc568d29c9336e560cfdf9616d"} Jan 21 13:22:03 crc kubenswrapper[4765]: I0121 13:22:03.477663 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gkqpl-config-mktqh" podStartSLOduration=2.47764263 podStartE2EDuration="2.47764263s" podCreationTimestamp="2026-01-21 13:22:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:22:03.476367173 +0000 UTC m=+1184.494093005" watchObservedRunningTime="2026-01-21 13:22:03.47764263 +0000 UTC m=+1184.495368452" Jan 21 13:22:03 crc kubenswrapper[4765]: I0121 13:22:03.770765 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9nwh5" Jan 21 13:22:03 crc kubenswrapper[4765]: I0121 13:22:03.852395 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psj72\" (UniqueName: \"kubernetes.io/projected/89ea0c05-6e45-48ca-a687-79c9e4cbc084-kube-api-access-psj72\") pod \"89ea0c05-6e45-48ca-a687-79c9e4cbc084\" (UID: \"89ea0c05-6e45-48ca-a687-79c9e4cbc084\") " Jan 21 13:22:03 crc kubenswrapper[4765]: I0121 13:22:03.852537 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ea0c05-6e45-48ca-a687-79c9e4cbc084-operator-scripts\") pod \"89ea0c05-6e45-48ca-a687-79c9e4cbc084\" (UID: \"89ea0c05-6e45-48ca-a687-79c9e4cbc084\") " Jan 21 13:22:03 crc kubenswrapper[4765]: I0121 13:22:03.853939 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89ea0c05-6e45-48ca-a687-79c9e4cbc084-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89ea0c05-6e45-48ca-a687-79c9e4cbc084" (UID: "89ea0c05-6e45-48ca-a687-79c9e4cbc084"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:22:03 crc kubenswrapper[4765]: I0121 13:22:03.862572 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ea0c05-6e45-48ca-a687-79c9e4cbc084-kube-api-access-psj72" (OuterVolumeSpecName: "kube-api-access-psj72") pod "89ea0c05-6e45-48ca-a687-79c9e4cbc084" (UID: "89ea0c05-6e45-48ca-a687-79c9e4cbc084"). InnerVolumeSpecName "kube-api-access-psj72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:22:03 crc kubenswrapper[4765]: I0121 13:22:03.954180 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psj72\" (UniqueName: \"kubernetes.io/projected/89ea0c05-6e45-48ca-a687-79c9e4cbc084-kube-api-access-psj72\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:03 crc kubenswrapper[4765]: I0121 13:22:03.954239 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ea0c05-6e45-48ca-a687-79c9e4cbc084-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:04 crc kubenswrapper[4765]: I0121 13:22:04.455990 4765 generic.go:334] "Generic (PLEG): container finished" podID="f0b20d84-fd95-4258-aac7-eed7e5ee5128" containerID="c342cbc167565b0b099a201b8cb67b39137ac2bc568d29c9336e560cfdf9616d" exitCode=0 Jan 21 13:22:04 crc kubenswrapper[4765]: I0121 13:22:04.456049 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gkqpl-config-mktqh" event={"ID":"f0b20d84-fd95-4258-aac7-eed7e5ee5128","Type":"ContainerDied","Data":"c342cbc167565b0b099a201b8cb67b39137ac2bc568d29c9336e560cfdf9616d"} Jan 21 13:22:04 crc kubenswrapper[4765]: I0121 13:22:04.458238 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9nwh5" event={"ID":"89ea0c05-6e45-48ca-a687-79c9e4cbc084","Type":"ContainerDied","Data":"a8bc5b5114c36e4f4d29801d93cb7ace478b0d379258b9f831f3369460a685e7"} Jan 21 13:22:04 crc kubenswrapper[4765]: I0121 13:22:04.458272 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8bc5b5114c36e4f4d29801d93cb7ace478b0d379258b9f831f3369460a685e7" Jan 21 13:22:04 crc kubenswrapper[4765]: I0121 13:22:04.458304 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9nwh5" Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.795726 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.885040 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f0b20d84-fd95-4258-aac7-eed7e5ee5128-var-log-ovn\") pod \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.885119 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f0b20d84-fd95-4258-aac7-eed7e5ee5128-var-run\") pod \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.885173 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f0b20d84-fd95-4258-aac7-eed7e5ee5128-additional-scripts\") pod \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.885306 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0b20d84-fd95-4258-aac7-eed7e5ee5128-scripts\") pod \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.885341 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcjhh\" (UniqueName: \"kubernetes.io/projected/f0b20d84-fd95-4258-aac7-eed7e5ee5128-kube-api-access-mcjhh\") pod \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.885467 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f0b20d84-fd95-4258-aac7-eed7e5ee5128-var-run-ovn\") pod \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\" (UID: \"f0b20d84-fd95-4258-aac7-eed7e5ee5128\") " Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.885916 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0b20d84-fd95-4258-aac7-eed7e5ee5128-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f0b20d84-fd95-4258-aac7-eed7e5ee5128" (UID: "f0b20d84-fd95-4258-aac7-eed7e5ee5128"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.885957 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0b20d84-fd95-4258-aac7-eed7e5ee5128-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f0b20d84-fd95-4258-aac7-eed7e5ee5128" (UID: "f0b20d84-fd95-4258-aac7-eed7e5ee5128"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.885976 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0b20d84-fd95-4258-aac7-eed7e5ee5128-var-run" (OuterVolumeSpecName: "var-run") pod "f0b20d84-fd95-4258-aac7-eed7e5ee5128" (UID: "f0b20d84-fd95-4258-aac7-eed7e5ee5128"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.886718 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0b20d84-fd95-4258-aac7-eed7e5ee5128-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f0b20d84-fd95-4258-aac7-eed7e5ee5128" (UID: "f0b20d84-fd95-4258-aac7-eed7e5ee5128"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.887551 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0b20d84-fd95-4258-aac7-eed7e5ee5128-scripts" (OuterVolumeSpecName: "scripts") pod "f0b20d84-fd95-4258-aac7-eed7e5ee5128" (UID: "f0b20d84-fd95-4258-aac7-eed7e5ee5128"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.910562 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0b20d84-fd95-4258-aac7-eed7e5ee5128-kube-api-access-mcjhh" (OuterVolumeSpecName: "kube-api-access-mcjhh") pod "f0b20d84-fd95-4258-aac7-eed7e5ee5128" (UID: "f0b20d84-fd95-4258-aac7-eed7e5ee5128"). InnerVolumeSpecName "kube-api-access-mcjhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.964925 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-gkqpl" Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.987534 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0b20d84-fd95-4258-aac7-eed7e5ee5128-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.987572 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcjhh\" (UniqueName: \"kubernetes.io/projected/f0b20d84-fd95-4258-aac7-eed7e5ee5128-kube-api-access-mcjhh\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.987771 4765 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f0b20d84-fd95-4258-aac7-eed7e5ee5128-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.987781 4765 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f0b20d84-fd95-4258-aac7-eed7e5ee5128-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.987792 4765 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f0b20d84-fd95-4258-aac7-eed7e5ee5128-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:05 crc kubenswrapper[4765]: I0121 13:22:05.987803 4765 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f0b20d84-fd95-4258-aac7-eed7e5ee5128-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:06 crc kubenswrapper[4765]: I0121 13:22:06.475471 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gkqpl-config-mktqh" event={"ID":"f0b20d84-fd95-4258-aac7-eed7e5ee5128","Type":"ContainerDied","Data":"c305cab8b10e62c6e90fde44753c4409ac9cc27f592917b4a311a31ccb7e82aa"} Jan 21 13:22:06 crc kubenswrapper[4765]: I0121 13:22:06.475517 4765 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c305cab8b10e62c6e90fde44753c4409ac9cc27f592917b4a311a31ccb7e82aa" Jan 21 13:22:06 crc kubenswrapper[4765]: I0121 13:22:06.475568 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gkqpl-config-mktqh" Jan 21 13:22:06 crc kubenswrapper[4765]: I0121 13:22:06.584194 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gkqpl-config-mktqh"] Jan 21 13:22:06 crc kubenswrapper[4765]: I0121 13:22:06.592504 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gkqpl-config-mktqh"] Jan 21 13:22:07 crc kubenswrapper[4765]: I0121 13:22:07.489760 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89b81f15-19f3-4dab-9b2d-fa41b2eab844","Type":"ContainerStarted","Data":"58ae72419a24339696ea0e56330b59c2920fccf703b7b5294f66b4057efef105"} Jan 21 13:22:07 crc kubenswrapper[4765]: I0121 13:22:07.490120 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89b81f15-19f3-4dab-9b2d-fa41b2eab844","Type":"ContainerStarted","Data":"b3b83425b9222894f7b87cbb303d2f68e272fd59c74eb402c9a5e0606803e3f0"} Jan 21 13:22:07 crc kubenswrapper[4765]: I0121 13:22:07.490136 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89b81f15-19f3-4dab-9b2d-fa41b2eab844","Type":"ContainerStarted","Data":"764a60496de520def1622ae6ca72a1f3a66142a0d788ed1ca4163eefbb523885"} Jan 21 13:22:07 crc kubenswrapper[4765]: I0121 13:22:07.626697 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0b20d84-fd95-4258-aac7-eed7e5ee5128" path="/var/lib/kubelet/pods/f0b20d84-fd95-4258-aac7-eed7e5ee5128/volumes" Jan 21 13:22:08 crc kubenswrapper[4765]: I0121 13:22:08.503729 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89b81f15-19f3-4dab-9b2d-fa41b2eab844","Type":"ContainerStarted","Data":"3066336a16234ee20e3f592dd64bd885272d899cffacfa6d3cb3009a01864de7"} Jan 21 13:22:08 crc kubenswrapper[4765]: I0121 13:22:08.503772 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89b81f15-19f3-4dab-9b2d-fa41b2eab844","Type":"ContainerStarted","Data":"992b6c08b5b23ffcbd4a522aa142645b63953576d7a6c6acf872004cb116a2ad"} Jan 21 13:22:08 crc kubenswrapper[4765]: I0121 13:22:08.503783 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89b81f15-19f3-4dab-9b2d-fa41b2eab844","Type":"ContainerStarted","Data":"44ca8bb8ded1d8ec090fefb9ff74ee6f1ab50e617a7f84575947e8a9d2068f4a"} Jan 21 13:22:08 crc kubenswrapper[4765]: I0121 13:22:08.503793 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"89b81f15-19f3-4dab-9b2d-fa41b2eab844","Type":"ContainerStarted","Data":"0fff7483ef76e9daa36cb85de80fa6bc18a017e41aa08dedbb6a773dd3eda607"} Jan 21 13:22:08 crc kubenswrapper[4765]: I0121 13:22:08.538329 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.95147064 podStartE2EDuration="36.538305836s" podCreationTimestamp="2026-01-21 13:21:32 +0000 UTC" firstStartedPulling="2026-01-21 13:21:51.217288216 +0000 UTC m=+1172.235014038" lastFinishedPulling="2026-01-21 13:22:06.804123412 +0000 UTC m=+1187.821849234" observedRunningTime="2026-01-21 13:22:08.535280177 +0000 
UTC m=+1189.553006019" watchObservedRunningTime="2026-01-21 13:22:08.538305836 +0000 UTC m=+1189.556031658" Jan 21 13:22:08 crc kubenswrapper[4765]: I0121 13:22:08.951536 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-vlb25"] Jan 21 13:22:08 crc kubenswrapper[4765]: E0121 13:22:08.952113 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ea0c05-6e45-48ca-a687-79c9e4cbc084" containerName="mariadb-account-create-update" Jan 21 13:22:08 crc kubenswrapper[4765]: I0121 13:22:08.952129 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ea0c05-6e45-48ca-a687-79c9e4cbc084" containerName="mariadb-account-create-update" Jan 21 13:22:08 crc kubenswrapper[4765]: E0121 13:22:08.952160 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b20d84-fd95-4258-aac7-eed7e5ee5128" containerName="ovn-config" Jan 21 13:22:08 crc kubenswrapper[4765]: I0121 13:22:08.952166 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b20d84-fd95-4258-aac7-eed7e5ee5128" containerName="ovn-config" Jan 21 13:22:08 crc kubenswrapper[4765]: I0121 13:22:08.952336 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ea0c05-6e45-48ca-a687-79c9e4cbc084" containerName="mariadb-account-create-update" Jan 21 13:22:08 crc kubenswrapper[4765]: I0121 13:22:08.952346 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0b20d84-fd95-4258-aac7-eed7e5ee5128" containerName="ovn-config" Jan 21 13:22:08 crc kubenswrapper[4765]: I0121 13:22:08.953125 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:08 crc kubenswrapper[4765]: I0121 13:22:08.955873 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 21 13:22:08 crc kubenswrapper[4765]: I0121 13:22:08.973678 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-vlb25"] Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.039996 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-vlb25\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.040059 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-vlb25\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.040080 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z67hp\" (UniqueName: \"kubernetes.io/projected/537d2855-a36c-4e32-bddc-ae0db8a757a3-kube-api-access-z67hp\") pod \"dnsmasq-dns-5c79d794d7-vlb25\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.040249 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5c79d794d7-vlb25\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.040371 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-vlb25\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.040516 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-config\") pod \"dnsmasq-dns-5c79d794d7-vlb25\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.142517 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-vlb25\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.142838 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-vlb25\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.142981 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-config\") pod \"dnsmasq-dns-5c79d794d7-vlb25\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.143082 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-vlb25\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.143196 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-vlb25\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.144052 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z67hp\" (UniqueName: \"kubernetes.io/projected/537d2855-a36c-4e32-bddc-ae0db8a757a3-kube-api-access-z67hp\") pod \"dnsmasq-dns-5c79d794d7-vlb25\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.143760 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-vlb25\" (UID: 
\"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.143979 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-vlb25\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.143998 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-config\") pod \"dnsmasq-dns-5c79d794d7-vlb25\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.144003 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-vlb25\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.143572 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-vlb25\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.165074 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z67hp\" (UniqueName: \"kubernetes.io/projected/537d2855-a36c-4e32-bddc-ae0db8a757a3-kube-api-access-z67hp\") pod \"dnsmasq-dns-5c79d794d7-vlb25\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.284203 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:09 crc kubenswrapper[4765]: I0121 13:22:09.852728 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-vlb25"] Jan 21 13:22:10 crc kubenswrapper[4765]: I0121 13:22:10.533604 4765 generic.go:334] "Generic (PLEG): container finished" podID="537d2855-a36c-4e32-bddc-ae0db8a757a3" containerID="68207a23a16c8bfe6de4dd01efa4972055e83f8530670490d5198d8bcdba8cb3" exitCode=0 Jan 21 13:22:10 crc kubenswrapper[4765]: I0121 13:22:10.534093 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" event={"ID":"537d2855-a36c-4e32-bddc-ae0db8a757a3","Type":"ContainerDied","Data":"68207a23a16c8bfe6de4dd01efa4972055e83f8530670490d5198d8bcdba8cb3"} Jan 21 13:22:10 crc kubenswrapper[4765]: I0121 13:22:10.534143 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" event={"ID":"537d2855-a36c-4e32-bddc-ae0db8a757a3","Type":"ContainerStarted","Data":"ac4409fa9e973288a03100ff24c2ac47ef8db7e9be54b716150d0675a430fa60"} Jan 21 13:22:11 crc kubenswrapper[4765]: I0121 13:22:11.552145 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" event={"ID":"537d2855-a36c-4e32-bddc-ae0db8a757a3","Type":"ContainerStarted","Data":"7fa52fa651a1398daa3f6b4d6078daa04812674f11407bf32845ecea773a8986"} Jan 21 13:22:11 crc kubenswrapper[4765]: I0121 13:22:11.552688 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:11 crc kubenswrapper[4765]: I0121 13:22:11.579450 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" podStartSLOduration=3.579415985 podStartE2EDuration="3.579415985s" podCreationTimestamp="2026-01-21 13:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:22:11.571604676 +0000 UTC m=+1192.589330498" watchObservedRunningTime="2026-01-21 13:22:11.579415985 +0000 UTC m=+1192.597141817" Jan 21 13:22:13 crc kubenswrapper[4765]: I0121 13:22:13.568754 4765 generic.go:334] "Generic (PLEG): container finished" podID="a300493b-663b-4b7e-b2b7-890abcca42dd" containerID="9fa7c6b73f21e838816589ad4c9d85a7805eea241a59ca34be4aa103ee7feafd" exitCode=0 Jan 21 13:22:13 crc kubenswrapper[4765]: I0121 13:22:13.568855 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6h2b4" event={"ID":"a300493b-663b-4b7e-b2b7-890abcca42dd","Type":"ContainerDied","Data":"9fa7c6b73f21e838816589ad4c9d85a7805eea241a59ca34be4aa103ee7feafd"} Jan 21 13:22:14 crc kubenswrapper[4765]: I0121 13:22:14.446367 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:22:14 crc kubenswrapper[4765]: I0121 13:22:14.446717 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:22:14 crc kubenswrapper[4765]: I0121 13:22:14.992078 4765 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6h2b4" Jan 21 13:22:15 crc kubenswrapper[4765]: I0121 13:22:15.063029 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a300493b-663b-4b7e-b2b7-890abcca42dd-config-data\") pod \"a300493b-663b-4b7e-b2b7-890abcca42dd\" (UID: \"a300493b-663b-4b7e-b2b7-890abcca42dd\") " Jan 21 13:22:15 crc kubenswrapper[4765]: I0121 13:22:15.064055 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn8k4\" (UniqueName: \"kubernetes.io/projected/a300493b-663b-4b7e-b2b7-890abcca42dd-kube-api-access-vn8k4\") pod \"a300493b-663b-4b7e-b2b7-890abcca42dd\" (UID: \"a300493b-663b-4b7e-b2b7-890abcca42dd\") " Jan 21 13:22:15 crc kubenswrapper[4765]: I0121 13:22:15.064403 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a300493b-663b-4b7e-b2b7-890abcca42dd-db-sync-config-data\") pod \"a300493b-663b-4b7e-b2b7-890abcca42dd\" (UID: \"a300493b-663b-4b7e-b2b7-890abcca42dd\") " Jan 21 13:22:15 crc kubenswrapper[4765]: I0121 13:22:15.064668 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a300493b-663b-4b7e-b2b7-890abcca42dd-combined-ca-bundle\") pod \"a300493b-663b-4b7e-b2b7-890abcca42dd\" (UID: \"a300493b-663b-4b7e-b2b7-890abcca42dd\") " Jan 21 13:22:15 crc kubenswrapper[4765]: I0121 13:22:15.085506 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a300493b-663b-4b7e-b2b7-890abcca42dd-kube-api-access-vn8k4" (OuterVolumeSpecName: "kube-api-access-vn8k4") pod "a300493b-663b-4b7e-b2b7-890abcca42dd" (UID: "a300493b-663b-4b7e-b2b7-890abcca42dd"). InnerVolumeSpecName "kube-api-access-vn8k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:22:15 crc kubenswrapper[4765]: I0121 13:22:15.087478 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a300493b-663b-4b7e-b2b7-890abcca42dd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a300493b-663b-4b7e-b2b7-890abcca42dd" (UID: "a300493b-663b-4b7e-b2b7-890abcca42dd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:22:15 crc kubenswrapper[4765]: I0121 13:22:15.107007 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a300493b-663b-4b7e-b2b7-890abcca42dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a300493b-663b-4b7e-b2b7-890abcca42dd" (UID: "a300493b-663b-4b7e-b2b7-890abcca42dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:22:15 crc kubenswrapper[4765]: I0121 13:22:15.115678 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a300493b-663b-4b7e-b2b7-890abcca42dd-config-data" (OuterVolumeSpecName: "config-data") pod "a300493b-663b-4b7e-b2b7-890abcca42dd" (UID: "a300493b-663b-4b7e-b2b7-890abcca42dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:22:15 crc kubenswrapper[4765]: I0121 13:22:15.167467 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a300493b-663b-4b7e-b2b7-890abcca42dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:15 crc kubenswrapper[4765]: I0121 13:22:15.167528 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn8k4\" (UniqueName: \"kubernetes.io/projected/a300493b-663b-4b7e-b2b7-890abcca42dd-kube-api-access-vn8k4\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:15 crc kubenswrapper[4765]: I0121 13:22:15.167555 4765 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a300493b-663b-4b7e-b2b7-890abcca42dd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:15 crc kubenswrapper[4765]: I0121 13:22:15.167573 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a300493b-663b-4b7e-b2b7-890abcca42dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:15 crc kubenswrapper[4765]: I0121 13:22:15.586456 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6h2b4" event={"ID":"a300493b-663b-4b7e-b2b7-890abcca42dd","Type":"ContainerDied","Data":"d009b9f894aa36c4e00347b4d3b13245fc4dd5885d1f604e727ee933703c0189"} Jan 21 13:22:15 crc kubenswrapper[4765]: I0121 13:22:15.586496 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d009b9f894aa36c4e00347b4d3b13245fc4dd5885d1f604e727ee933703c0189" Jan 21 13:22:15 crc kubenswrapper[4765]: I0121 13:22:15.586530 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6h2b4" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.096929 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-vlb25"] Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.098604 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.100319 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" podUID="537d2855-a36c-4e32-bddc-ae0db8a757a3" containerName="dnsmasq-dns" containerID="cri-o://7fa52fa651a1398daa3f6b4d6078daa04812674f11407bf32845ecea773a8986" gracePeriod=10 Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.159163 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-w9btc"] Jan 21 13:22:16 crc kubenswrapper[4765]: E0121 13:22:16.159536 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a300493b-663b-4b7e-b2b7-890abcca42dd" containerName="glance-db-sync" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.159548 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="a300493b-663b-4b7e-b2b7-890abcca42dd" containerName="glance-db-sync" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.159737 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="a300493b-663b-4b7e-b2b7-890abcca42dd" containerName="glance-db-sync" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.160676 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.186086 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-w9btc"] Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.291132 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whfwx\" (UniqueName: \"kubernetes.io/projected/121a128d-b52a-4cb6-a62c-34380823877c-kube-api-access-whfwx\") pod \"dnsmasq-dns-5f59b8f679-w9btc\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.291487 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-w9btc\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.291535 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-config\") pod \"dnsmasq-dns-5f59b8f679-w9btc\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.291575 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-w9btc\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.291604 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-w9btc\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.291629 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-w9btc\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.393073 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whfwx\" (UniqueName: \"kubernetes.io/projected/121a128d-b52a-4cb6-a62c-34380823877c-kube-api-access-whfwx\") pod \"dnsmasq-dns-5f59b8f679-w9btc\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.393116 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-w9btc\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.393142 4765 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-config\") pod \"dnsmasq-dns-5f59b8f679-w9btc\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.393164 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-w9btc\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.393185 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-w9btc\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.393206 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-w9btc\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.394158 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-w9btc\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.394201 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-w9btc\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.394291 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-w9btc\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.394758 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-w9btc\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.394785 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-config\") pod \"dnsmasq-dns-5f59b8f679-w9btc\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.449315 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whfwx\" (UniqueName: 
\"kubernetes.io/projected/121a128d-b52a-4cb6-a62c-34380823877c-kube-api-access-whfwx\") pod \"dnsmasq-dns-5f59b8f679-w9btc\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.488856 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.605104 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.605700 4765 generic.go:334] "Generic (PLEG): container finished" podID="537d2855-a36c-4e32-bddc-ae0db8a757a3" containerID="7fa52fa651a1398daa3f6b4d6078daa04812674f11407bf32845ecea773a8986" exitCode=0 Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.605737 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" event={"ID":"537d2855-a36c-4e32-bddc-ae0db8a757a3","Type":"ContainerDied","Data":"7fa52fa651a1398daa3f6b4d6078daa04812674f11407bf32845ecea773a8986"} Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.605795 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" event={"ID":"537d2855-a36c-4e32-bddc-ae0db8a757a3","Type":"ContainerDied","Data":"ac4409fa9e973288a03100ff24c2ac47ef8db7e9be54b716150d0675a430fa60"} Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.605812 4765 scope.go:117] "RemoveContainer" containerID="7fa52fa651a1398daa3f6b4d6078daa04812674f11407bf32845ecea773a8986" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.641472 4765 scope.go:117] "RemoveContainer" containerID="68207a23a16c8bfe6de4dd01efa4972055e83f8530670490d5198d8bcdba8cb3" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.701653 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-ovsdbserver-nb\") pod \"537d2855-a36c-4e32-bddc-ae0db8a757a3\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.701750 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z67hp\" (UniqueName: \"kubernetes.io/projected/537d2855-a36c-4e32-bddc-ae0db8a757a3-kube-api-access-z67hp\") pod \"537d2855-a36c-4e32-bddc-ae0db8a757a3\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.708135 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-dns-svc\") pod \"537d2855-a36c-4e32-bddc-ae0db8a757a3\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.708250 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-dns-swift-storage-0\") pod \"537d2855-a36c-4e32-bddc-ae0db8a757a3\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.708318 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-ovsdbserver-sb\") pod 
\"537d2855-a36c-4e32-bddc-ae0db8a757a3\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.709449 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/537d2855-a36c-4e32-bddc-ae0db8a757a3-kube-api-access-z67hp" (OuterVolumeSpecName: "kube-api-access-z67hp") pod "537d2855-a36c-4e32-bddc-ae0db8a757a3" (UID: "537d2855-a36c-4e32-bddc-ae0db8a757a3"). InnerVolumeSpecName "kube-api-access-z67hp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.708447 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-config\") pod \"537d2855-a36c-4e32-bddc-ae0db8a757a3\" (UID: \"537d2855-a36c-4e32-bddc-ae0db8a757a3\") " Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.710178 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z67hp\" (UniqueName: \"kubernetes.io/projected/537d2855-a36c-4e32-bddc-ae0db8a757a3-kube-api-access-z67hp\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.727466 4765 scope.go:117] "RemoveContainer" containerID="7fa52fa651a1398daa3f6b4d6078daa04812674f11407bf32845ecea773a8986" Jan 21 13:22:16 crc kubenswrapper[4765]: E0121 13:22:16.728654 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fa52fa651a1398daa3f6b4d6078daa04812674f11407bf32845ecea773a8986\": container with ID starting with 7fa52fa651a1398daa3f6b4d6078daa04812674f11407bf32845ecea773a8986 not found: ID does not exist" containerID="7fa52fa651a1398daa3f6b4d6078daa04812674f11407bf32845ecea773a8986" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.728707 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa52fa651a1398daa3f6b4d6078daa04812674f11407bf32845ecea773a8986"} err="failed to get container status \"7fa52fa651a1398daa3f6b4d6078daa04812674f11407bf32845ecea773a8986\": rpc error: code = NotFound desc = could not find container \"7fa52fa651a1398daa3f6b4d6078daa04812674f11407bf32845ecea773a8986\": container with ID starting with 7fa52fa651a1398daa3f6b4d6078daa04812674f11407bf32845ecea773a8986 not found: ID does not exist" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.728733 4765 scope.go:117] "RemoveContainer" containerID="68207a23a16c8bfe6de4dd01efa4972055e83f8530670490d5198d8bcdba8cb3" Jan 21 13:22:16 crc kubenswrapper[4765]: E0121 13:22:16.729051 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68207a23a16c8bfe6de4dd01efa4972055e83f8530670490d5198d8bcdba8cb3\": container with ID starting with 68207a23a16c8bfe6de4dd01efa4972055e83f8530670490d5198d8bcdba8cb3 not found: ID does not exist" containerID="68207a23a16c8bfe6de4dd01efa4972055e83f8530670490d5198d8bcdba8cb3" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.729110 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68207a23a16c8bfe6de4dd01efa4972055e83f8530670490d5198d8bcdba8cb3"} err="failed to get container status \"68207a23a16c8bfe6de4dd01efa4972055e83f8530670490d5198d8bcdba8cb3\": rpc error: code = NotFound desc = could not find container \"68207a23a16c8bfe6de4dd01efa4972055e83f8530670490d5198d8bcdba8cb3\": container with ID starting with 
68207a23a16c8bfe6de4dd01efa4972055e83f8530670490d5198d8bcdba8cb3 not found: ID does not exist" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.752874 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "537d2855-a36c-4e32-bddc-ae0db8a757a3" (UID: "537d2855-a36c-4e32-bddc-ae0db8a757a3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.753117 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-config" (OuterVolumeSpecName: "config") pod "537d2855-a36c-4e32-bddc-ae0db8a757a3" (UID: "537d2855-a36c-4e32-bddc-ae0db8a757a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.760318 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "537d2855-a36c-4e32-bddc-ae0db8a757a3" (UID: "537d2855-a36c-4e32-bddc-ae0db8a757a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.761058 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "537d2855-a36c-4e32-bddc-ae0db8a757a3" (UID: "537d2855-a36c-4e32-bddc-ae0db8a757a3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.764529 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "537d2855-a36c-4e32-bddc-ae0db8a757a3" (UID: "537d2855-a36c-4e32-bddc-ae0db8a757a3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.811769 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.811807 4765 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.811817 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.811826 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.811834 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/537d2855-a36c-4e32-bddc-ae0db8a757a3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:16 crc kubenswrapper[4765]: I0121 13:22:16.980171 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-w9btc"] Jan 21 13:22:16 crc kubenswrapper[4765]: W0121 13:22:16.981991 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod121a128d_b52a_4cb6_a62c_34380823877c.slice/crio-93cc0a8577571c0f8fd2e4fcc0b67cf6a0dde4bfaf2d5d89422bda22d77efd61 WatchSource:0}: Error finding container 93cc0a8577571c0f8fd2e4fcc0b67cf6a0dde4bfaf2d5d89422bda22d77efd61: Status 404 returned error can't find the container with id 93cc0a8577571c0f8fd2e4fcc0b67cf6a0dde4bfaf2d5d89422bda22d77efd61 Jan 21 13:22:17 crc kubenswrapper[4765]: I0121 13:22:17.454518 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 13:22:17 crc kubenswrapper[4765]: I0121 13:22:17.614562 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-vlb25" Jan 21 13:22:17 crc kubenswrapper[4765]: I0121 13:22:17.616046 4765 generic.go:334] "Generic (PLEG): container finished" podID="121a128d-b52a-4cb6-a62c-34380823877c" containerID="a2159a93e0ff58a5d38b86b179e0f07cf7b50f2e5912dcb6a46bc3cd021448e1" exitCode=0 Jan 21 13:22:17 crc kubenswrapper[4765]: I0121 13:22:17.630164 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" event={"ID":"121a128d-b52a-4cb6-a62c-34380823877c","Type":"ContainerDied","Data":"a2159a93e0ff58a5d38b86b179e0f07cf7b50f2e5912dcb6a46bc3cd021448e1"} Jan 21 13:22:17 crc kubenswrapper[4765]: I0121 13:22:17.630202 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" event={"ID":"121a128d-b52a-4cb6-a62c-34380823877c","Type":"ContainerStarted","Data":"93cc0a8577571c0f8fd2e4fcc0b67cf6a0dde4bfaf2d5d89422bda22d77efd61"} Jan 21 13:22:17 crc kubenswrapper[4765]: I0121 13:22:17.820191 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-vlb25"] Jan 21 13:22:17 crc kubenswrapper[4765]: I0121 13:22:17.843579 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-vlb25"] Jan 21 13:22:17 crc kubenswrapper[4765]: I0121 13:22:17.920432 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.017818 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-7kb2r"] Jan 21 13:22:18 crc kubenswrapper[4765]: E0121 13:22:18.018178 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="537d2855-a36c-4e32-bddc-ae0db8a757a3" containerName="dnsmasq-dns" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.018199 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="537d2855-a36c-4e32-bddc-ae0db8a757a3" containerName="dnsmasq-dns" Jan 21 13:22:18 crc kubenswrapper[4765]: E0121 13:22:18.018242 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="537d2855-a36c-4e32-bddc-ae0db8a757a3" containerName="init" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.018249 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="537d2855-a36c-4e32-bddc-ae0db8a757a3" containerName="init" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.018412 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="537d2855-a36c-4e32-bddc-ae0db8a757a3" containerName="dnsmasq-dns" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.019144 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7kb2r" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.040945 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7kb2r"] Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.123270 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-a27a-account-create-update-9g4bh"] Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.124372 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a27a-account-create-update-9g4bh" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.129180 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a27a-account-create-update-9g4bh"] Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.130032 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.143770 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tstbx\" (UniqueName: \"kubernetes.io/projected/ed971171-a23f-4ef1-9eec-28d47864b08f-kube-api-access-tstbx\") pod \"cinder-a27a-account-create-update-9g4bh\" (UID: \"ed971171-a23f-4ef1-9eec-28d47864b08f\") " pod="openstack/cinder-a27a-account-create-update-9g4bh" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.143824 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66ba0af3-6159-4e72-ab1d-f32955d76bfa-operator-scripts\") pod \"cinder-db-create-7kb2r\" (UID: \"66ba0af3-6159-4e72-ab1d-f32955d76bfa\") " pod="openstack/cinder-db-create-7kb2r" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.143859 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed971171-a23f-4ef1-9eec-28d47864b08f-operator-scripts\") pod \"cinder-a27a-account-create-update-9g4bh\" (UID: \"ed971171-a23f-4ef1-9eec-28d47864b08f\") " pod="openstack/cinder-a27a-account-create-update-9g4bh" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.143930 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpmgd\" (UniqueName: \"kubernetes.io/projected/66ba0af3-6159-4e72-ab1d-f32955d76bfa-kube-api-access-qpmgd\") pod \"cinder-db-create-7kb2r\" (UID: \"66ba0af3-6159-4e72-ab1d-f32955d76bfa\") " pod="openstack/cinder-db-create-7kb2r" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.165268 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-b6tzk"] Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.166634 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-b6tzk" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.217324 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-b6tzk"] Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.245223 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t27cz\" (UniqueName: \"kubernetes.io/projected/0107c27f-cb84-474c-8146-6fa6e03e0a8f-kube-api-access-t27cz\") pod \"barbican-db-create-b6tzk\" (UID: \"0107c27f-cb84-474c-8146-6fa6e03e0a8f\") " pod="openstack/barbican-db-create-b6tzk" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.245275 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpmgd\" (UniqueName: \"kubernetes.io/projected/66ba0af3-6159-4e72-ab1d-f32955d76bfa-kube-api-access-qpmgd\") pod \"cinder-db-create-7kb2r\" (UID: \"66ba0af3-6159-4e72-ab1d-f32955d76bfa\") " pod="openstack/cinder-db-create-7kb2r" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.245332 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0107c27f-cb84-474c-8146-6fa6e03e0a8f-operator-scripts\") pod \"barbican-db-create-b6tzk\" (UID: \"0107c27f-cb84-474c-8146-6fa6e03e0a8f\") " pod="openstack/barbican-db-create-b6tzk" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.245395 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tstbx\" (UniqueName: \"kubernetes.io/projected/ed971171-a23f-4ef1-9eec-28d47864b08f-kube-api-access-tstbx\") pod \"cinder-a27a-account-create-update-9g4bh\" (UID: \"ed971171-a23f-4ef1-9eec-28d47864b08f\") " pod="openstack/cinder-a27a-account-create-update-9g4bh" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.245413 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66ba0af3-6159-4e72-ab1d-f32955d76bfa-operator-scripts\") pod \"cinder-db-create-7kb2r\" (UID: \"66ba0af3-6159-4e72-ab1d-f32955d76bfa\") " pod="openstack/cinder-db-create-7kb2r" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.245436 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed971171-a23f-4ef1-9eec-28d47864b08f-operator-scripts\") pod \"cinder-a27a-account-create-update-9g4bh\" (UID: \"ed971171-a23f-4ef1-9eec-28d47864b08f\") " pod="openstack/cinder-a27a-account-create-update-9g4bh" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.246136 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed971171-a23f-4ef1-9eec-28d47864b08f-operator-scripts\") pod \"cinder-a27a-account-create-update-9g4bh\" (UID: \"ed971171-a23f-4ef1-9eec-28d47864b08f\") " pod="openstack/cinder-a27a-account-create-update-9g4bh" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.246514 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66ba0af3-6159-4e72-ab1d-f32955d76bfa-operator-scripts\") pod \"cinder-db-create-7kb2r\" (UID: \"66ba0af3-6159-4e72-ab1d-f32955d76bfa\") " pod="openstack/cinder-db-create-7kb2r" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.268131 4765 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qpmgd\" (UniqueName: \"kubernetes.io/projected/66ba0af3-6159-4e72-ab1d-f32955d76bfa-kube-api-access-qpmgd\") pod \"cinder-db-create-7kb2r\" (UID: \"66ba0af3-6159-4e72-ab1d-f32955d76bfa\") " pod="openstack/cinder-db-create-7kb2r" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.296830 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tstbx\" (UniqueName: \"kubernetes.io/projected/ed971171-a23f-4ef1-9eec-28d47864b08f-kube-api-access-tstbx\") pod \"cinder-a27a-account-create-update-9g4bh\" (UID: \"ed971171-a23f-4ef1-9eec-28d47864b08f\") " pod="openstack/cinder-a27a-account-create-update-9g4bh" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.338444 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7kb2r" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.346411 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0107c27f-cb84-474c-8146-6fa6e03e0a8f-operator-scripts\") pod \"barbican-db-create-b6tzk\" (UID: \"0107c27f-cb84-474c-8146-6fa6e03e0a8f\") " pod="openstack/barbican-db-create-b6tzk" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.346535 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t27cz\" (UniqueName: \"kubernetes.io/projected/0107c27f-cb84-474c-8146-6fa6e03e0a8f-kube-api-access-t27cz\") pod \"barbican-db-create-b6tzk\" (UID: \"0107c27f-cb84-474c-8146-6fa6e03e0a8f\") " pod="openstack/barbican-db-create-b6tzk" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.347306 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0107c27f-cb84-474c-8146-6fa6e03e0a8f-operator-scripts\") pod \"barbican-db-create-b6tzk\" (UID: \"0107c27f-cb84-474c-8146-6fa6e03e0a8f\") " pod="openstack/barbican-db-create-b6tzk" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.448932 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a27a-account-create-update-9g4bh" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.465852 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t27cz\" (UniqueName: \"kubernetes.io/projected/0107c27f-cb84-474c-8146-6fa6e03e0a8f-kube-api-access-t27cz\") pod \"barbican-db-create-b6tzk\" (UID: \"0107c27f-cb84-474c-8146-6fa6e03e0a8f\") " pod="openstack/barbican-db-create-b6tzk" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.477169 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-d337-account-create-update-t6njh"] Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.491944 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d337-account-create-update-t6njh" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.499585 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.502702 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-b6tzk" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.536614 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d337-account-create-update-t6njh"] Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.559096 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-nvsqn"] Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.560730 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nvsqn" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.563802 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rvs6\" (UniqueName: \"kubernetes.io/projected/25d4e1e5-919d-44d0-9c5d-e238325d9c00-kube-api-access-5rvs6\") pod \"barbican-d337-account-create-update-t6njh\" (UID: \"25d4e1e5-919d-44d0-9c5d-e238325d9c00\") " pod="openstack/barbican-d337-account-create-update-t6njh" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.563852 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdmch\" (UniqueName: \"kubernetes.io/projected/f76ba990-ea55-4459-8486-e413e80ba089-kube-api-access-cdmch\") pod \"neutron-db-create-nvsqn\" (UID: \"f76ba990-ea55-4459-8486-e413e80ba089\") " pod="openstack/neutron-db-create-nvsqn" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.563887 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f76ba990-ea55-4459-8486-e413e80ba089-operator-scripts\") pod \"neutron-db-create-nvsqn\" (UID: \"f76ba990-ea55-4459-8486-e413e80ba089\") " pod="openstack/neutron-db-create-nvsqn" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.563984 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25d4e1e5-919d-44d0-9c5d-e238325d9c00-operator-scripts\") pod \"barbican-d337-account-create-update-t6njh\" (UID: \"25d4e1e5-919d-44d0-9c5d-e238325d9c00\") " pod="openstack/barbican-d337-account-create-update-t6njh" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.607347 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-nvsqn"] Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.648159 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" event={"ID":"121a128d-b52a-4cb6-a62c-34380823877c","Type":"ContainerStarted","Data":"cc03b92e62ccb8d90084d7be2f638ce4c082aba4a211f71aec3bbfa9509605c1"} Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.648945 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.661983 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-tc4tf"] Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.663040 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-tc4tf" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.665605 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25d4e1e5-919d-44d0-9c5d-e238325d9c00-operator-scripts\") pod \"barbican-d337-account-create-update-t6njh\" (UID: \"25d4e1e5-919d-44d0-9c5d-e238325d9c00\") " pod="openstack/barbican-d337-account-create-update-t6njh" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.665670 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sd5r\" (UniqueName: \"kubernetes.io/projected/4cd0d3a4-9cce-49d9-9497-3398221354b0-kube-api-access-4sd5r\") pod \"keystone-db-sync-tc4tf\" (UID: \"4cd0d3a4-9cce-49d9-9497-3398221354b0\") " pod="openstack/keystone-db-sync-tc4tf" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.665698 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd0d3a4-9cce-49d9-9497-3398221354b0-combined-ca-bundle\") pod \"keystone-db-sync-tc4tf\" (UID: \"4cd0d3a4-9cce-49d9-9497-3398221354b0\") " pod="openstack/keystone-db-sync-tc4tf" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.665750 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rvs6\" (UniqueName: \"kubernetes.io/projected/25d4e1e5-919d-44d0-9c5d-e238325d9c00-kube-api-access-5rvs6\") pod \"barbican-d337-account-create-update-t6njh\" (UID: \"25d4e1e5-919d-44d0-9c5d-e238325d9c00\") " pod="openstack/barbican-d337-account-create-update-t6njh" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.665798 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdmch\" (UniqueName: \"kubernetes.io/projected/f76ba990-ea55-4459-8486-e413e80ba089-kube-api-access-cdmch\") pod \"neutron-db-create-nvsqn\" (UID: \"f76ba990-ea55-4459-8486-e413e80ba089\") " pod="openstack/neutron-db-create-nvsqn" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.665826 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f76ba990-ea55-4459-8486-e413e80ba089-operator-scripts\") pod \"neutron-db-create-nvsqn\" (UID: \"f76ba990-ea55-4459-8486-e413e80ba089\") " pod="openstack/neutron-db-create-nvsqn" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.665846 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cd0d3a4-9cce-49d9-9497-3398221354b0-config-data\") pod \"keystone-db-sync-tc4tf\" (UID: \"4cd0d3a4-9cce-49d9-9497-3398221354b0\") " pod="openstack/keystone-db-sync-tc4tf" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.666654 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25d4e1e5-919d-44d0-9c5d-e238325d9c00-operator-scripts\") pod \"barbican-d337-account-create-update-t6njh\" (UID: \"25d4e1e5-919d-44d0-9c5d-e238325d9c00\") " pod="openstack/barbican-d337-account-create-update-t6njh" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.668752 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f76ba990-ea55-4459-8486-e413e80ba089-operator-scripts\") pod 
\"neutron-db-create-nvsqn\" (UID: \"f76ba990-ea55-4459-8486-e413e80ba089\") " pod="openstack/neutron-db-create-nvsqn" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.675080 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.675429 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.675374 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.682663 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w98g4" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.713152 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rvs6\" (UniqueName: \"kubernetes.io/projected/25d4e1e5-919d-44d0-9c5d-e238325d9c00-kube-api-access-5rvs6\") pod \"barbican-d337-account-create-update-t6njh\" (UID: \"25d4e1e5-919d-44d0-9c5d-e238325d9c00\") " pod="openstack/barbican-d337-account-create-update-t6njh" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.740387 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" podStartSLOduration=2.740369809 podStartE2EDuration="2.740369809s" podCreationTimestamp="2026-01-21 13:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:22:18.691815673 +0000 UTC m=+1199.709541495" watchObservedRunningTime="2026-01-21 13:22:18.740369809 +0000 UTC m=+1199.758095621" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.744700 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-tc4tf"] Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.762839 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdmch\" (UniqueName: \"kubernetes.io/projected/f76ba990-ea55-4459-8486-e413e80ba089-kube-api-access-cdmch\") pod \"neutron-db-create-nvsqn\" (UID: \"f76ba990-ea55-4459-8486-e413e80ba089\") " pod="openstack/neutron-db-create-nvsqn" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.766724 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cd0d3a4-9cce-49d9-9497-3398221354b0-config-data\") pod \"keystone-db-sync-tc4tf\" (UID: \"4cd0d3a4-9cce-49d9-9497-3398221354b0\") " pod="openstack/keystone-db-sync-tc4tf" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.766831 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sd5r\" (UniqueName: \"kubernetes.io/projected/4cd0d3a4-9cce-49d9-9497-3398221354b0-kube-api-access-4sd5r\") pod \"keystone-db-sync-tc4tf\" (UID: \"4cd0d3a4-9cce-49d9-9497-3398221354b0\") " pod="openstack/keystone-db-sync-tc4tf" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.766851 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd0d3a4-9cce-49d9-9497-3398221354b0-combined-ca-bundle\") pod \"keystone-db-sync-tc4tf\" (UID: \"4cd0d3a4-9cce-49d9-9497-3398221354b0\") " pod="openstack/keystone-db-sync-tc4tf" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.780427 4765 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cd0d3a4-9cce-49d9-9497-3398221354b0-config-data\") pod \"keystone-db-sync-tc4tf\" (UID: \"4cd0d3a4-9cce-49d9-9497-3398221354b0\") " pod="openstack/keystone-db-sync-tc4tf" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.781236 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd0d3a4-9cce-49d9-9497-3398221354b0-combined-ca-bundle\") pod \"keystone-db-sync-tc4tf\" (UID: \"4cd0d3a4-9cce-49d9-9497-3398221354b0\") " pod="openstack/keystone-db-sync-tc4tf" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.786277 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9f6c-account-create-update-g645s"] Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.787807 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9f6c-account-create-update-g645s" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.797316 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.810272 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9f6c-account-create-update-g645s"] Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.838629 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d337-account-create-update-t6njh" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.865778 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sd5r\" (UniqueName: \"kubernetes.io/projected/4cd0d3a4-9cce-49d9-9497-3398221354b0-kube-api-access-4sd5r\") pod \"keystone-db-sync-tc4tf\" (UID: \"4cd0d3a4-9cce-49d9-9497-3398221354b0\") " pod="openstack/keystone-db-sync-tc4tf" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.897236 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nvsqn" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.974150 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fae8415-8196-4888-90aa-8f40261530e4-operator-scripts\") pod \"neutron-9f6c-account-create-update-g645s\" (UID: \"8fae8415-8196-4888-90aa-8f40261530e4\") " pod="openstack/neutron-9f6c-account-create-update-g645s" Jan 21 13:22:18 crc kubenswrapper[4765]: I0121 13:22:18.974237 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9j6d\" (UniqueName: \"kubernetes.io/projected/8fae8415-8196-4888-90aa-8f40261530e4-kube-api-access-j9j6d\") pod \"neutron-9f6c-account-create-update-g645s\" (UID: \"8fae8415-8196-4888-90aa-8f40261530e4\") " pod="openstack/neutron-9f6c-account-create-update-g645s" Jan 21 13:22:19 crc kubenswrapper[4765]: I0121 13:22:19.022581 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-tc4tf" Jan 21 13:22:19 crc kubenswrapper[4765]: I0121 13:22:19.076063 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fae8415-8196-4888-90aa-8f40261530e4-operator-scripts\") pod \"neutron-9f6c-account-create-update-g645s\" (UID: \"8fae8415-8196-4888-90aa-8f40261530e4\") " pod="openstack/neutron-9f6c-account-create-update-g645s" Jan 21 13:22:19 crc kubenswrapper[4765]: I0121 13:22:19.076360 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9j6d\" (UniqueName: \"kubernetes.io/projected/8fae8415-8196-4888-90aa-8f40261530e4-kube-api-access-j9j6d\") pod \"neutron-9f6c-account-create-update-g645s\" (UID: \"8fae8415-8196-4888-90aa-8f40261530e4\") " pod="openstack/neutron-9f6c-account-create-update-g645s" Jan 21 13:22:19 crc kubenswrapper[4765]: I0121 13:22:19.077043 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fae8415-8196-4888-90aa-8f40261530e4-operator-scripts\") pod \"neutron-9f6c-account-create-update-g645s\" (UID: \"8fae8415-8196-4888-90aa-8f40261530e4\") " pod="openstack/neutron-9f6c-account-create-update-g645s" Jan 21 13:22:19 crc kubenswrapper[4765]: I0121 13:22:19.109050 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9j6d\" (UniqueName: \"kubernetes.io/projected/8fae8415-8196-4888-90aa-8f40261530e4-kube-api-access-j9j6d\") pod \"neutron-9f6c-account-create-update-g645s\" (UID: \"8fae8415-8196-4888-90aa-8f40261530e4\") " pod="openstack/neutron-9f6c-account-create-update-g645s" Jan 21 13:22:19 crc kubenswrapper[4765]: I0121 13:22:19.150112 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9f6c-account-create-update-g645s" Jan 21 13:22:19 crc kubenswrapper[4765]: I0121 13:22:19.193173 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7kb2r"] Jan 21 13:22:19 crc kubenswrapper[4765]: I0121 13:22:19.624883 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="537d2855-a36c-4e32-bddc-ae0db8a757a3" path="/var/lib/kubelet/pods/537d2855-a36c-4e32-bddc-ae0db8a757a3/volumes" Jan 21 13:22:19 crc kubenswrapper[4765]: I0121 13:22:19.669493 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7kb2r" event={"ID":"66ba0af3-6159-4e72-ab1d-f32955d76bfa","Type":"ContainerStarted","Data":"5b2cb13a0e750b24b5f68bf71372250a3f9ccfb78f91300751f9f2427487d214"} Jan 21 13:22:19 crc kubenswrapper[4765]: I0121 13:22:19.746036 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-b6tzk"] Jan 21 13:22:19 crc kubenswrapper[4765]: I0121 13:22:19.973186 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-nvsqn"] Jan 21 13:22:20 crc kubenswrapper[4765]: W0121 13:22:20.020532 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded971171_a23f_4ef1_9eec_28d47864b08f.slice/crio-7ab2facaaf256b3ddcea756ed081bc5386df38cc466b4dc79059ad8f88b5070e WatchSource:0}: Error finding container 7ab2facaaf256b3ddcea756ed081bc5386df38cc466b4dc79059ad8f88b5070e: Status 404 returned error can't find the container with id 7ab2facaaf256b3ddcea756ed081bc5386df38cc466b4dc79059ad8f88b5070e Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.027950 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d337-account-create-update-t6njh"] Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.049610 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.055958 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.059865 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a27a-account-create-update-9g4bh"] Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.072968 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-tc4tf"] Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.136010 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9f6c-account-create-update-g645s"] Jan 21 13:22:20 crc kubenswrapper[4765]: W0121 13:22:20.145433 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fae8415_8196_4888_90aa_8f40261530e4.slice/crio-6eaeb7667f3439e0d3b6bf647f53a0ce48b8fed7469a2a5038fa8e0c13a1d987 WatchSource:0}: Error finding container 6eaeb7667f3439e0d3b6bf647f53a0ce48b8fed7469a2a5038fa8e0c13a1d987: Status 404 returned error can't find the container with id 6eaeb7667f3439e0d3b6bf647f53a0ce48b8fed7469a2a5038fa8e0c13a1d987 Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.153024 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.682330 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a27a-account-create-update-9g4bh" 
event={"ID":"ed971171-a23f-4ef1-9eec-28d47864b08f","Type":"ContainerStarted","Data":"a399100f6c3d42ed941735929f76b7de5dfd450fd457fab3c1d245509bdaa616"} Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.682376 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a27a-account-create-update-9g4bh" event={"ID":"ed971171-a23f-4ef1-9eec-28d47864b08f","Type":"ContainerStarted","Data":"7ab2facaaf256b3ddcea756ed081bc5386df38cc466b4dc79059ad8f88b5070e"} Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.687390 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-b6tzk" event={"ID":"0107c27f-cb84-474c-8146-6fa6e03e0a8f","Type":"ContainerStarted","Data":"469412805a800458a07d2ecbccd810dd5fb80c2c4b175b0e1b61b6dd7653dc2b"} Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.687435 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-b6tzk" event={"ID":"0107c27f-cb84-474c-8146-6fa6e03e0a8f","Type":"ContainerStarted","Data":"6d79341ce1617a4995b4049068ef95c91ba0973438dcc42a3b06f2f9c7671e27"} Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.690392 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tc4tf" event={"ID":"4cd0d3a4-9cce-49d9-9497-3398221354b0","Type":"ContainerStarted","Data":"be69b96f555291c2e125a88bf70ec253653c9ea0b78f85fbd1e5d1d455ce3ae8"} Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.693168 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9f6c-account-create-update-g645s" event={"ID":"8fae8415-8196-4888-90aa-8f40261530e4","Type":"ContainerStarted","Data":"948ad17a8afdf2aa472006584031b56685d245f349f975028be9b3877f15f741"} Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.693193 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9f6c-account-create-update-g645s" event={"ID":"8fae8415-8196-4888-90aa-8f40261530e4","Type":"ContainerStarted","Data":"6eaeb7667f3439e0d3b6bf647f53a0ce48b8fed7469a2a5038fa8e0c13a1d987"} Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.698305 4765 generic.go:334] "Generic (PLEG): container finished" podID="66ba0af3-6159-4e72-ab1d-f32955d76bfa" containerID="d53e75ee06d2c86845426580212e68bdccaff55b84b5f8d258b6fe54e7debb03" exitCode=0 Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.698443 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7kb2r" event={"ID":"66ba0af3-6159-4e72-ab1d-f32955d76bfa","Type":"ContainerDied","Data":"d53e75ee06d2c86845426580212e68bdccaff55b84b5f8d258b6fe54e7debb03"} Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.705390 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nvsqn" event={"ID":"f76ba990-ea55-4459-8486-e413e80ba089","Type":"ContainerStarted","Data":"efc2d686337a7743a9ab89363ac72d84fed522dc68b0602424269139851017fd"} Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.705447 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nvsqn" event={"ID":"f76ba990-ea55-4459-8486-e413e80ba089","Type":"ContainerStarted","Data":"b3e813b3df1eb281b12b3552957a90090e99b264d5ab296c820d1951cb04dae5"} Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.707443 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-a27a-account-create-update-9g4bh" podStartSLOduration=2.707427484 podStartE2EDuration="2.707427484s" podCreationTimestamp="2026-01-21 13:22:18 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:22:20.70456206 +0000 UTC m=+1201.722287882" watchObservedRunningTime="2026-01-21 13:22:20.707427484 +0000 UTC m=+1201.725153306" Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.711069 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d337-account-create-update-t6njh" event={"ID":"25d4e1e5-919d-44d0-9c5d-e238325d9c00","Type":"ContainerStarted","Data":"06935b48d4bf43cb03e5aed3cfe863ead12d20d098c290ae9f2f4f46a891bd1a"} Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.711184 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d337-account-create-update-t6njh" event={"ID":"25d4e1e5-919d-44d0-9c5d-e238325d9c00","Type":"ContainerStarted","Data":"04fe2816ade1f322c8cb27c6b880fb0b2d53a0ce6f0fae28eb3ebdde040bfcc6"} Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.732151 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-9f6c-account-create-update-g645s" podStartSLOduration=2.732126129 podStartE2EDuration="2.732126129s" podCreationTimestamp="2026-01-21 13:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:22:20.721223539 +0000 UTC m=+1201.738949381" watchObservedRunningTime="2026-01-21 13:22:20.732126129 +0000 UTC m=+1201.749851951" Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.771134 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-b6tzk" podStartSLOduration=2.771112744 podStartE2EDuration="2.771112744s" podCreationTimestamp="2026-01-21 13:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:22:20.75700005 +0000 UTC m=+1201.774725872" watchObservedRunningTime="2026-01-21 13:22:20.771112744 +0000 UTC m=+1201.788838566" Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.784054 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-nvsqn" podStartSLOduration=2.784031724 podStartE2EDuration="2.784031724s" podCreationTimestamp="2026-01-21 13:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:22:20.781888281 +0000 UTC m=+1201.799614093" watchObservedRunningTime="2026-01-21 13:22:20.784031724 +0000 UTC m=+1201.801757546" Jan 21 13:22:20 crc kubenswrapper[4765]: I0121 13:22:20.805827 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-d337-account-create-update-t6njh" podStartSLOduration=2.805805543 podStartE2EDuration="2.805805543s" podCreationTimestamp="2026-01-21 13:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:22:20.799733865 +0000 UTC m=+1201.817459687" watchObservedRunningTime="2026-01-21 13:22:20.805805543 +0000 UTC m=+1201.823531365" Jan 21 13:22:21 crc kubenswrapper[4765]: I0121 13:22:21.719970 4765 generic.go:334] "Generic (PLEG): container finished" podID="8fae8415-8196-4888-90aa-8f40261530e4" containerID="948ad17a8afdf2aa472006584031b56685d245f349f975028be9b3877f15f741" exitCode=0 Jan 21 13:22:21 crc kubenswrapper[4765]: I0121 13:22:21.720037 4765 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9f6c-account-create-update-g645s" event={"ID":"8fae8415-8196-4888-90aa-8f40261530e4","Type":"ContainerDied","Data":"948ad17a8afdf2aa472006584031b56685d245f349f975028be9b3877f15f741"} Jan 21 13:22:21 crc kubenswrapper[4765]: I0121 13:22:21.723827 4765 generic.go:334] "Generic (PLEG): container finished" podID="f76ba990-ea55-4459-8486-e413e80ba089" containerID="efc2d686337a7743a9ab89363ac72d84fed522dc68b0602424269139851017fd" exitCode=0 Jan 21 13:22:21 crc kubenswrapper[4765]: I0121 13:22:21.723886 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nvsqn" event={"ID":"f76ba990-ea55-4459-8486-e413e80ba089","Type":"ContainerDied","Data":"efc2d686337a7743a9ab89363ac72d84fed522dc68b0602424269139851017fd"} Jan 21 13:22:21 crc kubenswrapper[4765]: I0121 13:22:21.725616 4765 generic.go:334] "Generic (PLEG): container finished" podID="25d4e1e5-919d-44d0-9c5d-e238325d9c00" containerID="06935b48d4bf43cb03e5aed3cfe863ead12d20d098c290ae9f2f4f46a891bd1a" exitCode=0 Jan 21 13:22:21 crc kubenswrapper[4765]: I0121 13:22:21.725688 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d337-account-create-update-t6njh" event={"ID":"25d4e1e5-919d-44d0-9c5d-e238325d9c00","Type":"ContainerDied","Data":"06935b48d4bf43cb03e5aed3cfe863ead12d20d098c290ae9f2f4f46a891bd1a"} Jan 21 13:22:21 crc kubenswrapper[4765]: I0121 13:22:21.727696 4765 generic.go:334] "Generic (PLEG): container finished" podID="ed971171-a23f-4ef1-9eec-28d47864b08f" containerID="a399100f6c3d42ed941735929f76b7de5dfd450fd457fab3c1d245509bdaa616" exitCode=0 Jan 21 13:22:21 crc kubenswrapper[4765]: I0121 13:22:21.727750 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a27a-account-create-update-9g4bh" event={"ID":"ed971171-a23f-4ef1-9eec-28d47864b08f","Type":"ContainerDied","Data":"a399100f6c3d42ed941735929f76b7de5dfd450fd457fab3c1d245509bdaa616"} Jan 21 13:22:21 crc kubenswrapper[4765]: I0121 13:22:21.729843 4765 generic.go:334] "Generic (PLEG): container finished" podID="0107c27f-cb84-474c-8146-6fa6e03e0a8f" containerID="469412805a800458a07d2ecbccd810dd5fb80c2c4b175b0e1b61b6dd7653dc2b" exitCode=0 Jan 21 13:22:21 crc kubenswrapper[4765]: I0121 13:22:21.730081 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-b6tzk" event={"ID":"0107c27f-cb84-474c-8146-6fa6e03e0a8f","Type":"ContainerDied","Data":"469412805a800458a07d2ecbccd810dd5fb80c2c4b175b0e1b61b6dd7653dc2b"} Jan 21 13:22:23 crc kubenswrapper[4765]: I0121 13:22:22.182202 4765 util.go:48] "No ready sandbox for pod can be found. 
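Need to start a new one" pod="openstack/cinder-db-create-7kb2r"

The "SyncLoop (PLEG)" entries above all share one shape: a pod reference plus an event={"ID":...,"Type":...,"Data":...} payload, where ID is the pod UID, Type is ContainerStarted or ContainerDied, and Data is a container or sandbox ID; the generic.go:334 lines carry the exit code for the container that died. The payload is plain JSON, so a transcript like this can be summarized mechanically. A minimal sketch, assuming one journal record per line on stdin; the program itself, including its output format, is invented for illustration and is not kubelet code:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"regexp"
)

// plegEvent mirrors the event={...} payload printed by kubelet.go:2453.
type plegEvent struct {
	ID   string `json:"ID"`   // pod UID
	Type string `json:"Type"` // ContainerStarted, ContainerDied, ...
	Data string `json:"Data"` // container or sandbox ID
}

var eventRe = regexp.MustCompile(`pod="([^"]+)" event=(\{.*?\})`)

func main() {
	counts := map[string]map[string]int{} // pod -> event type -> count
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // journal lines can be very long
	for sc.Scan() {
		for _, m := range eventRe.FindAllStringSubmatch(sc.Text(), -1) {
			var ev plegEvent
			if err := json.Unmarshal([]byte(m[2]), &ev); err != nil {
				continue // not a PLEG payload
			}
			if counts[m[1]] == nil {
				counts[m[1]] = map[string]int{}
			}
			counts[m[1]][ev.Type]++
		}
	}
	for pod, byType := range counts {
		fmt.Printf("%s: %v\n", pod, byType)
	}
}
```

Piping journalctl -u kubelet through this shows each job pod above with two ContainerStarted IDs (sandbox plus container) and one ContainerDied, matching the entries in this stretch.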
Jan 21 13:22:23 crc kubenswrapper[4765]: I0121 13:22:22.291919 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpmgd\" (UniqueName: \"kubernetes.io/projected/66ba0af3-6159-4e72-ab1d-f32955d76bfa-kube-api-access-qpmgd\") pod \"66ba0af3-6159-4e72-ab1d-f32955d76bfa\" (UID: \"66ba0af3-6159-4e72-ab1d-f32955d76bfa\") " Jan 21 13:22:23 crc kubenswrapper[4765]: I0121 13:22:22.293706 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66ba0af3-6159-4e72-ab1d-f32955d76bfa-operator-scripts\") pod \"66ba0af3-6159-4e72-ab1d-f32955d76bfa\" (UID: \"66ba0af3-6159-4e72-ab1d-f32955d76bfa\") " Jan 21 13:22:23 crc kubenswrapper[4765]: I0121 13:22:22.294384 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66ba0af3-6159-4e72-ab1d-f32955d76bfa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66ba0af3-6159-4e72-ab1d-f32955d76bfa" (UID: "66ba0af3-6159-4e72-ab1d-f32955d76bfa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:22:23 crc kubenswrapper[4765]: I0121 13:22:22.301385 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66ba0af3-6159-4e72-ab1d-f32955d76bfa-kube-api-access-qpmgd" (OuterVolumeSpecName: "kube-api-access-qpmgd") pod "66ba0af3-6159-4e72-ab1d-f32955d76bfa" (UID: "66ba0af3-6159-4e72-ab1d-f32955d76bfa"). InnerVolumeSpecName "kube-api-access-qpmgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:22:23 crc kubenswrapper[4765]: I0121 13:22:22.395552 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpmgd\" (UniqueName: \"kubernetes.io/projected/66ba0af3-6159-4e72-ab1d-f32955d76bfa-kube-api-access-qpmgd\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:23 crc kubenswrapper[4765]: I0121 13:22:22.395591 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66ba0af3-6159-4e72-ab1d-f32955d76bfa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:23 crc kubenswrapper[4765]: I0121 13:22:22.741682 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7kb2r" event={"ID":"66ba0af3-6159-4e72-ab1d-f32955d76bfa","Type":"ContainerDied","Data":"5b2cb13a0e750b24b5f68bf71372250a3f9ccfb78f91300751f9f2427487d214"} Jan 21 13:22:23 crc kubenswrapper[4765]: I0121 13:22:22.741726 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b2cb13a0e750b24b5f68bf71372250a3f9ccfb78f91300751f9f2427487d214" Jan 21 13:22:23 crc kubenswrapper[4765]: I0121 13:22:22.741762 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7kb2r" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.204776 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-b6tzk" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.212722 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nvsqn" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.237759 4765 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-9f6c-account-create-update-g645s" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.262095 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a27a-account-create-update-9g4bh" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.268658 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d337-account-create-update-t6njh" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.276771 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0107c27f-cb84-474c-8146-6fa6e03e0a8f-operator-scripts\") pod \"0107c27f-cb84-474c-8146-6fa6e03e0a8f\" (UID: \"0107c27f-cb84-474c-8146-6fa6e03e0a8f\") " Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.276988 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t27cz\" (UniqueName: \"kubernetes.io/projected/0107c27f-cb84-474c-8146-6fa6e03e0a8f-kube-api-access-t27cz\") pod \"0107c27f-cb84-474c-8146-6fa6e03e0a8f\" (UID: \"0107c27f-cb84-474c-8146-6fa6e03e0a8f\") " Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.277936 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0107c27f-cb84-474c-8146-6fa6e03e0a8f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0107c27f-cb84-474c-8146-6fa6e03e0a8f" (UID: "0107c27f-cb84-474c-8146-6fa6e03e0a8f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.288958 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0107c27f-cb84-474c-8146-6fa6e03e0a8f-kube-api-access-t27cz" (OuterVolumeSpecName: "kube-api-access-t27cz") pod "0107c27f-cb84-474c-8146-6fa6e03e0a8f" (UID: "0107c27f-cb84-474c-8146-6fa6e03e0a8f"). InnerVolumeSpecName "kube-api-access-t27cz". 
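PluginName "kubernetes.io/projected", VolumeGidValue ""

The teardown of these finished job pods runs in three visible stages: reconciler_common.go:159 logs "UnmountVolume started", operation_generator.go:803 logs "UnmountVolume.TearDown succeeded", and reconciler_common.go:293 logs "Volume detached" once the volume is gone from the node's actual state. Pairing the first and last stage by the volume's UniqueName is a quick way to spot unmounts that never finish. A sketch under that assumption, again expecting one journal record per line on stdin; the stage strings are verbatim from this log, and TearDown lines are deliberately ignored since they key on OuterVolumeSpecName rather than UniqueName:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

// UniqueName appears as (UniqueName: \"kubernetes.io/...\") in the journal,
// with klog's escaped quotes, so the backslashes are optional in the pattern.
var uniqRe = regexp.MustCompile(`UniqueName: \\?"([^"\\]+)\\?"`)

func main() {
	stage := map[string]string{} // UniqueName -> last observed stage
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		line := sc.Text()
		m := uniqRe.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		switch {
		case strings.Contains(line, "UnmountVolume started"):
			stage[m[1]] = "unmount-started"
		case strings.Contains(line, "Volume detached"):
			stage[m[1]] = "detached"
		}
	}
	for vol, s := range stage {
		if s != "detached" {
			fmt.Printf("never detached: %s (last stage: %s)\n", vol, s)
		}
	}
}
```

On this transcript every operator-scripts and kube-api-access volume reaches "detached", so the tracker prints nothing; a pod stuck terminating would show up immediately.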
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.379087 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f76ba990-ea55-4459-8486-e413e80ba089-operator-scripts\") pod \"f76ba990-ea55-4459-8486-e413e80ba089\" (UID: \"f76ba990-ea55-4459-8486-e413e80ba089\") " Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.379350 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdmch\" (UniqueName: \"kubernetes.io/projected/f76ba990-ea55-4459-8486-e413e80ba089-kube-api-access-cdmch\") pod \"f76ba990-ea55-4459-8486-e413e80ba089\" (UID: \"f76ba990-ea55-4459-8486-e413e80ba089\") " Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.379482 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9j6d\" (UniqueName: \"kubernetes.io/projected/8fae8415-8196-4888-90aa-8f40261530e4-kube-api-access-j9j6d\") pod \"8fae8415-8196-4888-90aa-8f40261530e4\" (UID: \"8fae8415-8196-4888-90aa-8f40261530e4\") " Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.379588 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fae8415-8196-4888-90aa-8f40261530e4-operator-scripts\") pod \"8fae8415-8196-4888-90aa-8f40261530e4\" (UID: \"8fae8415-8196-4888-90aa-8f40261530e4\") " Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.379708 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rvs6\" (UniqueName: \"kubernetes.io/projected/25d4e1e5-919d-44d0-9c5d-e238325d9c00-kube-api-access-5rvs6\") pod \"25d4e1e5-919d-44d0-9c5d-e238325d9c00\" (UID: \"25d4e1e5-919d-44d0-9c5d-e238325d9c00\") " Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.379584 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f76ba990-ea55-4459-8486-e413e80ba089-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f76ba990-ea55-4459-8486-e413e80ba089" (UID: "f76ba990-ea55-4459-8486-e413e80ba089"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.379883 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tstbx\" (UniqueName: \"kubernetes.io/projected/ed971171-a23f-4ef1-9eec-28d47864b08f-kube-api-access-tstbx\") pod \"ed971171-a23f-4ef1-9eec-28d47864b08f\" (UID: \"ed971171-a23f-4ef1-9eec-28d47864b08f\") " Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.379947 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fae8415-8196-4888-90aa-8f40261530e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8fae8415-8196-4888-90aa-8f40261530e4" (UID: "8fae8415-8196-4888-90aa-8f40261530e4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.380067 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed971171-a23f-4ef1-9eec-28d47864b08f-operator-scripts\") pod \"ed971171-a23f-4ef1-9eec-28d47864b08f\" (UID: \"ed971171-a23f-4ef1-9eec-28d47864b08f\") " Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.380201 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25d4e1e5-919d-44d0-9c5d-e238325d9c00-operator-scripts\") pod \"25d4e1e5-919d-44d0-9c5d-e238325d9c00\" (UID: \"25d4e1e5-919d-44d0-9c5d-e238325d9c00\") " Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.380461 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed971171-a23f-4ef1-9eec-28d47864b08f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed971171-a23f-4ef1-9eec-28d47864b08f" (UID: "ed971171-a23f-4ef1-9eec-28d47864b08f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.380820 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25d4e1e5-919d-44d0-9c5d-e238325d9c00-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "25d4e1e5-919d-44d0-9c5d-e238325d9c00" (UID: "25d4e1e5-919d-44d0-9c5d-e238325d9c00"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.380827 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fae8415-8196-4888-90aa-8f40261530e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.380869 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed971171-a23f-4ef1-9eec-28d47864b08f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.380884 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t27cz\" (UniqueName: \"kubernetes.io/projected/0107c27f-cb84-474c-8146-6fa6e03e0a8f-kube-api-access-t27cz\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.380898 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0107c27f-cb84-474c-8146-6fa6e03e0a8f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.380910 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f76ba990-ea55-4459-8486-e413e80ba089-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.383345 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fae8415-8196-4888-90aa-8f40261530e4-kube-api-access-j9j6d" (OuterVolumeSpecName: "kube-api-access-j9j6d") pod "8fae8415-8196-4888-90aa-8f40261530e4" (UID: "8fae8415-8196-4888-90aa-8f40261530e4"). InnerVolumeSpecName "kube-api-access-j9j6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.384568 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f76ba990-ea55-4459-8486-e413e80ba089-kube-api-access-cdmch" (OuterVolumeSpecName: "kube-api-access-cdmch") pod "f76ba990-ea55-4459-8486-e413e80ba089" (UID: "f76ba990-ea55-4459-8486-e413e80ba089"). InnerVolumeSpecName "kube-api-access-cdmch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.384677 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25d4e1e5-919d-44d0-9c5d-e238325d9c00-kube-api-access-5rvs6" (OuterVolumeSpecName: "kube-api-access-5rvs6") pod "25d4e1e5-919d-44d0-9c5d-e238325d9c00" (UID: "25d4e1e5-919d-44d0-9c5d-e238325d9c00"). InnerVolumeSpecName "kube-api-access-5rvs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.385146 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed971171-a23f-4ef1-9eec-28d47864b08f-kube-api-access-tstbx" (OuterVolumeSpecName: "kube-api-access-tstbx") pod "ed971171-a23f-4ef1-9eec-28d47864b08f" (UID: "ed971171-a23f-4ef1-9eec-28d47864b08f"). InnerVolumeSpecName "kube-api-access-tstbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.482987 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdmch\" (UniqueName: \"kubernetes.io/projected/f76ba990-ea55-4459-8486-e413e80ba089-kube-api-access-cdmch\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.483033 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9j6d\" (UniqueName: \"kubernetes.io/projected/8fae8415-8196-4888-90aa-8f40261530e4-kube-api-access-j9j6d\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.483044 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rvs6\" (UniqueName: \"kubernetes.io/projected/25d4e1e5-919d-44d0-9c5d-e238325d9c00-kube-api-access-5rvs6\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.483055 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tstbx\" (UniqueName: \"kubernetes.io/projected/ed971171-a23f-4ef1-9eec-28d47864b08f-kube-api-access-tstbx\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.483067 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25d4e1e5-919d-44d0-9c5d-e238325d9c00-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.490376 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.580498 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-2bt8n"] Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.580735 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" podUID="dbc3ab54-aeb7-4536-a7c9-30078f148ec5" containerName="dnsmasq-dns" containerID="cri-o://723ece5d58ab11eb6af4757ddbaea824afe9040fdf90e2e933af82aebabd1de7" gracePeriod=10 Jan 21 
Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.786612 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-b6tzk" event={"ID":"0107c27f-cb84-474c-8146-6fa6e03e0a8f","Type":"ContainerDied","Data":"6d79341ce1617a4995b4049068ef95c91ba0973438dcc42a3b06f2f9c7671e27"} Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.787770 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d79341ce1617a4995b4049068ef95c91ba0973438dcc42a3b06f2f9c7671e27" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.790510 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9f6c-account-create-update-g645s" event={"ID":"8fae8415-8196-4888-90aa-8f40261530e4","Type":"ContainerDied","Data":"6eaeb7667f3439e0d3b6bf647f53a0ce48b8fed7469a2a5038fa8e0c13a1d987"} Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.790550 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eaeb7667f3439e0d3b6bf647f53a0ce48b8fed7469a2a5038fa8e0c13a1d987" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.790644 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9f6c-account-create-update-g645s" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.792673 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d337-account-create-update-t6njh" event={"ID":"25d4e1e5-919d-44d0-9c5d-e238325d9c00","Type":"ContainerDied","Data":"04fe2816ade1f322c8cb27c6b880fb0b2d53a0ce6f0fae28eb3ebdde040bfcc6"} Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.792935 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04fe2816ade1f322c8cb27c6b880fb0b2d53a0ce6f0fae28eb3ebdde040bfcc6" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.793022 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d337-account-create-update-t6njh" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.796735 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nvsqn" event={"ID":"f76ba990-ea55-4459-8486-e413e80ba089","Type":"ContainerDied","Data":"b3e813b3df1eb281b12b3552957a90090e99b264d5ab296c820d1951cb04dae5"} Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.796778 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3e813b3df1eb281b12b3552957a90090e99b264d5ab296c820d1951cb04dae5" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.796853 4765 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-db-create-nvsqn" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.803626 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a27a-account-create-update-9g4bh" event={"ID":"ed971171-a23f-4ef1-9eec-28d47864b08f","Type":"ContainerDied","Data":"7ab2facaaf256b3ddcea756ed081bc5386df38cc466b4dc79059ad8f88b5070e"} Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.803682 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ab2facaaf256b3ddcea756ed081bc5386df38cc466b4dc79059ad8f88b5070e" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.803769 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a27a-account-create-update-9g4bh" Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.810429 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tc4tf" event={"ID":"4cd0d3a4-9cce-49d9-9497-3398221354b0","Type":"ContainerStarted","Data":"e950c9cad8c1d622fe1bc87d455211fa3cc6a3110be9a529d75a92d672410921"} Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.817025 4765 generic.go:334] "Generic (PLEG): container finished" podID="dbc3ab54-aeb7-4536-a7c9-30078f148ec5" containerID="723ece5d58ab11eb6af4757ddbaea824afe9040fdf90e2e933af82aebabd1de7" exitCode=0 Jan 21 13:22:26 crc kubenswrapper[4765]: I0121 13:22:26.817054 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" event={"ID":"dbc3ab54-aeb7-4536-a7c9-30078f148ec5","Type":"ContainerDied","Data":"723ece5d58ab11eb6af4757ddbaea824afe9040fdf90e2e933af82aebabd1de7"} Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.130273 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.153920 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-tc4tf" podStartSLOduration=3.146740237 podStartE2EDuration="9.153897031s" podCreationTimestamp="2026-01-21 13:22:18 +0000 UTC" firstStartedPulling="2026-01-21 13:22:20.053117736 +0000 UTC m=+1201.070843558" lastFinishedPulling="2026-01-21 13:22:26.06027452 +0000 UTC m=+1207.078000352" observedRunningTime="2026-01-21 13:22:26.834594383 +0000 UTC m=+1207.852320205" watchObservedRunningTime="2026-01-21 13:22:27.153897031 +0000 UTC m=+1208.171622853" Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.202991 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-ovsdbserver-nb\") pod \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.203240 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mqzc\" (UniqueName: \"kubernetes.io/projected/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-kube-api-access-6mqzc\") pod \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.203298 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-config\") pod \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.203372 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-dns-svc\") pod \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.203391 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-ovsdbserver-sb\") pod \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\" (UID: \"dbc3ab54-aeb7-4536-a7c9-30078f148ec5\") " Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.227282 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-kube-api-access-6mqzc" (OuterVolumeSpecName: "kube-api-access-6mqzc") pod "dbc3ab54-aeb7-4536-a7c9-30078f148ec5" (UID: "dbc3ab54-aeb7-4536-a7c9-30078f148ec5"). InnerVolumeSpecName "kube-api-access-6mqzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.276009 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dbc3ab54-aeb7-4536-a7c9-30078f148ec5" (UID: "dbc3ab54-aeb7-4536-a7c9-30078f148ec5"). InnerVolumeSpecName "ovsdbserver-nb". 
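PluginName "kubernetes.io/configmap", VolumeGidValue ""

The startup-latency entry for keystone-db-sync-tc4tf above is the first in this run with a real pull window, which makes the tracker's arithmetic visible: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the time spent pulling images. A worked check of those timestamps (using lastFinishedPulling minus firstStartedPulling as an approximation of the summed per-pull time, which presumably explains why the result differs from the logged 3.146740237s in the eighth decimal):

```go
package main

import (
	"fmt"
	"time"
)

// Go accepts fractional seconds when parsing even if the layout omits them.
const layout = "2006-01-02 15:04:05 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-21 13:22:18 +0000 UTC")
	firstPull := mustParse("2026-01-21 13:22:20.053117736 +0000 UTC")
	lastPull := mustParse("2026-01-21 13:22:26.06027452 +0000 UTC")
	observed := mustParse("2026-01-21 13:22:27.153897031 +0000 UTC")

	e2e := observed.Sub(created)       // 9.153897031s, the logged podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // ~6.007156784s spent pulling the image
	fmt.Println("e2e:", e2e)
	fmt.Println("pulling:", pulling)
	fmt.Println("slo estimate:", e2e-pulling) // ~3.14674s, close to podStartSLOduration
}
```

The zero-pull jobs earlier, whose firstStartedPulling and lastFinishedPulling both show the epoch placeholder 0001-01-01, are the degenerate case: their SLO and E2E durations are identical.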
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.306653 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mqzc\" (UniqueName: \"kubernetes.io/projected/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-kube-api-access-6mqzc\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.306686 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.336347 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-config" (OuterVolumeSpecName: "config") pod "dbc3ab54-aeb7-4536-a7c9-30078f148ec5" (UID: "dbc3ab54-aeb7-4536-a7c9-30078f148ec5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.336675 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dbc3ab54-aeb7-4536-a7c9-30078f148ec5" (UID: "dbc3ab54-aeb7-4536-a7c9-30078f148ec5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.336975 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dbc3ab54-aeb7-4536-a7c9-30078f148ec5" (UID: "dbc3ab54-aeb7-4536-a7c9-30078f148ec5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.407740 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.407783 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.407794 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dbc3ab54-aeb7-4536-a7c9-30078f148ec5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.827458 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" event={"ID":"dbc3ab54-aeb7-4536-a7c9-30078f148ec5","Type":"ContainerDied","Data":"678fa8aabb8ebb41101f22207e69d37d7f10ddfcad5db166aeeed2355a54e511"} Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.827516 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-2bt8n" Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.827818 4765 scope.go:117] "RemoveContainer" containerID="723ece5d58ab11eb6af4757ddbaea824afe9040fdf90e2e933af82aebabd1de7" Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.853531 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-2bt8n"] Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.853777 4765 scope.go:117] "RemoveContainer" containerID="ad2c682c9eb8a2f35d73a6fa79fd3f1426e419368e0235e5909f3d659d53644c" Jan 21 13:22:27 crc kubenswrapper[4765]: I0121 13:22:27.863820 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-2bt8n"] Jan 21 13:22:29 crc kubenswrapper[4765]: I0121 13:22:29.624757 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbc3ab54-aeb7-4536-a7c9-30078f148ec5" path="/var/lib/kubelet/pods/dbc3ab54-aeb7-4536-a7c9-30078f148ec5/volumes" Jan 21 13:22:30 crc kubenswrapper[4765]: I0121 13:22:30.856349 4765 generic.go:334] "Generic (PLEG): container finished" podID="4cd0d3a4-9cce-49d9-9497-3398221354b0" containerID="e950c9cad8c1d622fe1bc87d455211fa3cc6a3110be9a529d75a92d672410921" exitCode=0 Jan 21 13:22:30 crc kubenswrapper[4765]: I0121 13:22:30.856410 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tc4tf" event={"ID":"4cd0d3a4-9cce-49d9-9497-3398221354b0","Type":"ContainerDied","Data":"e950c9cad8c1d622fe1bc87d455211fa3cc6a3110be9a529d75a92d672410921"} Jan 21 13:22:32 crc kubenswrapper[4765]: I0121 13:22:32.187953 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tc4tf" Jan 21 13:22:32 crc kubenswrapper[4765]: I0121 13:22:32.288964 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd0d3a4-9cce-49d9-9497-3398221354b0-combined-ca-bundle\") pod \"4cd0d3a4-9cce-49d9-9497-3398221354b0\" (UID: \"4cd0d3a4-9cce-49d9-9497-3398221354b0\") " Jan 21 13:22:32 crc kubenswrapper[4765]: I0121 13:22:32.289329 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sd5r\" (UniqueName: \"kubernetes.io/projected/4cd0d3a4-9cce-49d9-9497-3398221354b0-kube-api-access-4sd5r\") pod \"4cd0d3a4-9cce-49d9-9497-3398221354b0\" (UID: \"4cd0d3a4-9cce-49d9-9497-3398221354b0\") " Jan 21 13:22:32 crc kubenswrapper[4765]: I0121 13:22:32.289411 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cd0d3a4-9cce-49d9-9497-3398221354b0-config-data\") pod \"4cd0d3a4-9cce-49d9-9497-3398221354b0\" (UID: \"4cd0d3a4-9cce-49d9-9497-3398221354b0\") " Jan 21 13:22:32 crc kubenswrapper[4765]: I0121 13:22:32.301394 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd0d3a4-9cce-49d9-9497-3398221354b0-kube-api-access-4sd5r" (OuterVolumeSpecName: "kube-api-access-4sd5r") pod "4cd0d3a4-9cce-49d9-9497-3398221354b0" (UID: "4cd0d3a4-9cce-49d9-9497-3398221354b0"). InnerVolumeSpecName "kube-api-access-4sd5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:22:32 crc kubenswrapper[4765]: I0121 13:22:32.315535 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd0d3a4-9cce-49d9-9497-3398221354b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cd0d3a4-9cce-49d9-9497-3398221354b0" (UID: "4cd0d3a4-9cce-49d9-9497-3398221354b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:22:32 crc kubenswrapper[4765]: I0121 13:22:32.336517 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd0d3a4-9cce-49d9-9497-3398221354b0-config-data" (OuterVolumeSpecName: "config-data") pod "4cd0d3a4-9cce-49d9-9497-3398221354b0" (UID: "4cd0d3a4-9cce-49d9-9497-3398221354b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:22:32 crc kubenswrapper[4765]: I0121 13:22:32.390969 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cd0d3a4-9cce-49d9-9497-3398221354b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:32 crc kubenswrapper[4765]: I0121 13:22:32.391007 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sd5r\" (UniqueName: \"kubernetes.io/projected/4cd0d3a4-9cce-49d9-9497-3398221354b0-kube-api-access-4sd5r\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:32 crc kubenswrapper[4765]: I0121 13:22:32.391018 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cd0d3a4-9cce-49d9-9497-3398221354b0-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:32 crc kubenswrapper[4765]: I0121 13:22:32.873712 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tc4tf" event={"ID":"4cd0d3a4-9cce-49d9-9497-3398221354b0","Type":"ContainerDied","Data":"be69b96f555291c2e125a88bf70ec253653c9ea0b78f85fbd1e5d1d455ce3ae8"} Jan 21 13:22:32 crc kubenswrapper[4765]: I0121 13:22:32.873950 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be69b96f555291c2e125a88bf70ec253653c9ea0b78f85fbd1e5d1d455ce3ae8" Jan 21 13:22:32 crc kubenswrapper[4765]: I0121 13:22:32.873816 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-tc4tf" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.186940 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-gxcv9"] Jan 21 13:22:33 crc kubenswrapper[4765]: E0121 13:22:33.187367 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0107c27f-cb84-474c-8146-6fa6e03e0a8f" containerName="mariadb-database-create" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.187388 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="0107c27f-cb84-474c-8146-6fa6e03e0a8f" containerName="mariadb-database-create" Jan 21 13:22:33 crc kubenswrapper[4765]: E0121 13:22:33.187401 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ba0af3-6159-4e72-ab1d-f32955d76bfa" containerName="mariadb-database-create" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.187409 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ba0af3-6159-4e72-ab1d-f32955d76bfa" containerName="mariadb-database-create" Jan 21 13:22:33 crc kubenswrapper[4765]: E0121 13:22:33.187417 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fae8415-8196-4888-90aa-8f40261530e4" containerName="mariadb-account-create-update" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.187425 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fae8415-8196-4888-90aa-8f40261530e4" containerName="mariadb-account-create-update" Jan 21 13:22:33 crc kubenswrapper[4765]: E0121 13:22:33.187438 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbc3ab54-aeb7-4536-a7c9-30078f148ec5" containerName="init" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.187444 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbc3ab54-aeb7-4536-a7c9-30078f148ec5" containerName="init" Jan 21 13:22:33 crc kubenswrapper[4765]: E0121 13:22:33.187465 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76ba990-ea55-4459-8486-e413e80ba089" containerName="mariadb-database-create" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.187471 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76ba990-ea55-4459-8486-e413e80ba089" containerName="mariadb-database-create" Jan 21 13:22:33 crc kubenswrapper[4765]: E0121 13:22:33.187488 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbc3ab54-aeb7-4536-a7c9-30078f148ec5" containerName="dnsmasq-dns" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.187496 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbc3ab54-aeb7-4536-a7c9-30078f148ec5" containerName="dnsmasq-dns" Jan 21 13:22:33 crc kubenswrapper[4765]: E0121 13:22:33.187509 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d4e1e5-919d-44d0-9c5d-e238325d9c00" containerName="mariadb-account-create-update" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.187516 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d4e1e5-919d-44d0-9c5d-e238325d9c00" containerName="mariadb-account-create-update" Jan 21 13:22:33 crc kubenswrapper[4765]: E0121 13:22:33.187526 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd0d3a4-9cce-49d9-9497-3398221354b0" containerName="keystone-db-sync" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.187532 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd0d3a4-9cce-49d9-9497-3398221354b0" containerName="keystone-db-sync" Jan 21 13:22:33 crc kubenswrapper[4765]: E0121 13:22:33.187544 4765 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed971171-a23f-4ef1-9eec-28d47864b08f" containerName="mariadb-account-create-update" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.187550 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed971171-a23f-4ef1-9eec-28d47864b08f" containerName="mariadb-account-create-update" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.187696 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fae8415-8196-4888-90aa-8f40261530e4" containerName="mariadb-account-create-update" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.187707 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f76ba990-ea55-4459-8486-e413e80ba089" containerName="mariadb-database-create" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.187714 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbc3ab54-aeb7-4536-a7c9-30078f148ec5" containerName="dnsmasq-dns" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.187725 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="0107c27f-cb84-474c-8146-6fa6e03e0a8f" containerName="mariadb-database-create" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.187737 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cd0d3a4-9cce-49d9-9497-3398221354b0" containerName="keystone-db-sync" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.187744 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="66ba0af3-6159-4e72-ab1d-f32955d76bfa" containerName="mariadb-database-create" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.187754 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d4e1e5-919d-44d0-9c5d-e238325d9c00" containerName="mariadb-account-create-update" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.187765 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed971171-a23f-4ef1-9eec-28d47864b08f" containerName="mariadb-account-create-update" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.188760 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.217806 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-gxcv9"] Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.249112 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-g879q"] Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.250564 4765 util.go:30] "No sandbox for pod can be found. 
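Need to start a new one" pod="openstack/keystone-bootstrap-g879q"

The cpu_manager.go:410 and state_mem.go:107 pairs above (with the matching memory_manager.go:354 lines) are the kubelet clearing per-container resource-manager state left behind by the completed db-create, account-create and keystone-db-sync pods before admitting the new ones. That state is checkpointed on disk, conventionally at /var/lib/kubelet/cpu_manager_state. A reader sketch; the JSON field names below match the checkpoint format as I understand it, but treat the schema as an assumption and verify against a real node:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// cpuManagerState models /var/lib/kubelet/cpu_manager_state. With the
// default "none" policy the entries map is typically empty; the static
// policy records podUID -> container -> cpuset assignments.
type cpuManagerState struct {
	PolicyName    string                       `json:"policyName"`
	DefaultCPUSet string                       `json:"defaultCpuSet"`
	Entries       map[string]map[string]string `json:"entries"`
	Checksum      uint64                       `json:"checksum"`
}

func main() {
	raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
	if err != nil {
		log.Fatal(err)
	}
	var st cpuManagerState
	if err := json.Unmarshal(raw, &st); err != nil {
		log.Fatal(err)
	}
	fmt.Println("policy:", st.PolicyName, "default cpuset:", st.DefaultCPUSet)
	for pod, ctrs := range st.Entries {
		for name, set := range ctrs {
			fmt.Printf("pod %s container %s -> cpus %s\n", pod, name, set)
		}
	}
}
```

A stale entry here, one whose pod UID no longer exists on the node, is precisely what RemoveStaleState is deleting in the lines above.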
Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.255354 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w98g4" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.255409 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.255491 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.255615 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.258053 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.284558 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-g879q"] Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.313134 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-gxcv9\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") " pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.313493 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-fernet-keys\") pod \"keystone-bootstrap-g879q\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.313548 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-gxcv9\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") " pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.313607 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-gxcv9\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") " pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.313670 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s25jn\" (UniqueName: \"kubernetes.io/projected/495fd7a5-eeae-4ebd-8606-ea08a366864e-kube-api-access-s25jn\") pod \"keystone-bootstrap-g879q\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.313690 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-config-data\") pod \"keystone-bootstrap-g879q\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.313708 4765 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-config\") pod \"dnsmasq-dns-bbf5cc879-gxcv9\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") " pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.313732 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-combined-ca-bundle\") pod \"keystone-bootstrap-g879q\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.313758 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-scripts\") pod \"keystone-bootstrap-g879q\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.313782 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvkvn\" (UniqueName: \"kubernetes.io/projected/df7728cd-9577-4616-bfaa-d0c5f1301e51-kube-api-access-pvkvn\") pod \"dnsmasq-dns-bbf5cc879-gxcv9\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") " pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.313807 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-credential-keys\") pod \"keystone-bootstrap-g879q\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.313828 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-gxcv9\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") " pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.415955 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-gxcv9\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") " pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.416018 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-gxcv9\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") " pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.416045 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-fernet-keys\") pod \"keystone-bootstrap-g879q\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.416088 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-gxcv9\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") " pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.416129 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-gxcv9\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") " pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.416178 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s25jn\" (UniqueName: \"kubernetes.io/projected/495fd7a5-eeae-4ebd-8606-ea08a366864e-kube-api-access-s25jn\") pod \"keystone-bootstrap-g879q\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.416195 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-config-data\") pod \"keystone-bootstrap-g879q\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.416231 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-config\") pod \"dnsmasq-dns-bbf5cc879-gxcv9\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") " pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.416260 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-combined-ca-bundle\") pod \"keystone-bootstrap-g879q\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.416283 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-scripts\") pod \"keystone-bootstrap-g879q\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.416308 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvkvn\" (UniqueName: \"kubernetes.io/projected/df7728cd-9577-4616-bfaa-d0c5f1301e51-kube-api-access-pvkvn\") pod \"dnsmasq-dns-bbf5cc879-gxcv9\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") " pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.416329 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-credential-keys\") pod \"keystone-bootstrap-g879q\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.417238 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-gxcv9\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") " pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.417294 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-gxcv9\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") " pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.418843 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-gxcv9\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") " pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.419072 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-gxcv9\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") " pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.420169 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-config\") pod \"dnsmasq-dns-bbf5cc879-gxcv9\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") " pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.426311 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-config-data\") pod \"keystone-bootstrap-g879q\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.431648 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-scripts\") pod \"keystone-bootstrap-g879q\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.431797 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-credential-keys\") pod \"keystone-bootstrap-g879q\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.454112 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-combined-ca-bundle\") pod \"keystone-bootstrap-g879q\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.466043 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s25jn\" (UniqueName: \"kubernetes.io/projected/495fd7a5-eeae-4ebd-8606-ea08a366864e-kube-api-access-s25jn\") pod \"keystone-bootstrap-g879q\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " 
pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.470288 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvkvn\" (UniqueName: \"kubernetes.io/projected/df7728cd-9577-4616-bfaa-d0c5f1301e51-kube-api-access-pvkvn\") pod \"dnsmasq-dns-bbf5cc879-gxcv9\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") " pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.479097 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-fernet-keys\") pod \"keystone-bootstrap-g879q\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.515563 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.523494 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-548d6bd7d9-2w72v"] Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.524906 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-548d6bd7d9-2w72v" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.571643 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.571986 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.572157 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-9dt2x" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.572369 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.574299 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.621036 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-config-data\") pod \"horizon-548d6bd7d9-2w72v\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " pod="openstack/horizon-548d6bd7d9-2w72v" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.621077 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86fjf\" (UniqueName: \"kubernetes.io/projected/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-kube-api-access-86fjf\") pod \"horizon-548d6bd7d9-2w72v\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " pod="openstack/horizon-548d6bd7d9-2w72v" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.621153 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-scripts\") pod \"horizon-548d6bd7d9-2w72v\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " pod="openstack/horizon-548d6bd7d9-2w72v" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.621187 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-horizon-secret-key\") pod \"horizon-548d6bd7d9-2w72v\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " pod="openstack/horizon-548d6bd7d9-2w72v" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.621251 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-logs\") pod \"horizon-548d6bd7d9-2w72v\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " pod="openstack/horizon-548d6bd7d9-2w72v" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.656325 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-548d6bd7d9-2w72v"] Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.707617 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-v4h97"] Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.709494 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.716020 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wwcwv" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.723025 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.723276 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.723773 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-scripts\") pod \"cinder-db-sync-v4h97\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.723875 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-scripts\") pod \"horizon-548d6bd7d9-2w72v\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " pod="openstack/horizon-548d6bd7d9-2w72v" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.723983 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-horizon-secret-key\") pod \"horizon-548d6bd7d9-2w72v\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " pod="openstack/horizon-548d6bd7d9-2w72v" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.724019 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-logs\") pod \"horizon-548d6bd7d9-2w72v\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " pod="openstack/horizon-548d6bd7d9-2w72v" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.724099 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-combined-ca-bundle\") pod \"cinder-db-sync-v4h97\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.724191 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f0ee201-f570-4414-9feb-616192dfca3b-etc-machine-id\") pod \"cinder-db-sync-v4h97\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.724231 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-config-data\") pod \"cinder-db-sync-v4h97\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.724268 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-config-data\") pod \"horizon-548d6bd7d9-2w72v\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " 
pod="openstack/horizon-548d6bd7d9-2w72v" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.724300 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86fjf\" (UniqueName: \"kubernetes.io/projected/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-kube-api-access-86fjf\") pod \"horizon-548d6bd7d9-2w72v\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " pod="openstack/horizon-548d6bd7d9-2w72v" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.724361 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4kwq\" (UniqueName: \"kubernetes.io/projected/3f0ee201-f570-4414-9feb-616192dfca3b-kube-api-access-k4kwq\") pod \"cinder-db-sync-v4h97\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.724413 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-db-sync-config-data\") pod \"cinder-db-sync-v4h97\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.724950 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-logs\") pod \"horizon-548d6bd7d9-2w72v\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " pod="openstack/horizon-548d6bd7d9-2w72v" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.726662 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-config-data\") pod \"horizon-548d6bd7d9-2w72v\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " pod="openstack/horizon-548d6bd7d9-2w72v" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.726702 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-scripts\") pod \"horizon-548d6bd7d9-2w72v\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " pod="openstack/horizon-548d6bd7d9-2w72v" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.727197 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-g87wz"] Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.732861 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-g87wz" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.734522 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-horizon-secret-key\") pod \"horizon-548d6bd7d9-2w72v\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " pod="openstack/horizon-548d6bd7d9-2w72v" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.746457 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.746861 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.747042 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-b4t8h" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.792775 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86fjf\" (UniqueName: \"kubernetes.io/projected/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-kube-api-access-86fjf\") pod \"horizon-548d6bd7d9-2w72v\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " pod="openstack/horizon-548d6bd7d9-2w72v" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.800745 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-64bhp"] Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.803510 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-64bhp" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.809156 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xkmhm" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.809577 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.830920 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4kwq\" (UniqueName: \"kubernetes.io/projected/3f0ee201-f570-4414-9feb-616192dfca3b-kube-api-access-k4kwq\") pod \"cinder-db-sync-v4h97\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.830996 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-db-sync-config-data\") pod \"cinder-db-sync-v4h97\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.831085 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5-combined-ca-bundle\") pod \"neutron-db-sync-g87wz\" (UID: \"2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5\") " pod="openstack/neutron-db-sync-g87wz" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.831127 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7141df0-548e-4699-8620-4d85ba1b1218-combined-ca-bundle\") pod \"barbican-db-sync-64bhp\" (UID: \"e7141df0-548e-4699-8620-4d85ba1b1218\") " 
pod="openstack/barbican-db-sync-64bhp" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.831154 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e7141df0-548e-4699-8620-4d85ba1b1218-db-sync-config-data\") pod \"barbican-db-sync-64bhp\" (UID: \"e7141df0-548e-4699-8620-4d85ba1b1218\") " pod="openstack/barbican-db-sync-64bhp" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.831177 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-scripts\") pod \"cinder-db-sync-v4h97\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.831281 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2k46\" (UniqueName: \"kubernetes.io/projected/2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5-kube-api-access-b2k46\") pod \"neutron-db-sync-g87wz\" (UID: \"2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5\") " pod="openstack/neutron-db-sync-g87wz" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.831329 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p24bt\" (UniqueName: \"kubernetes.io/projected/e7141df0-548e-4699-8620-4d85ba1b1218-kube-api-access-p24bt\") pod \"barbican-db-sync-64bhp\" (UID: \"e7141df0-548e-4699-8620-4d85ba1b1218\") " pod="openstack/barbican-db-sync-64bhp" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.831358 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5-config\") pod \"neutron-db-sync-g87wz\" (UID: \"2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5\") " pod="openstack/neutron-db-sync-g87wz" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.831429 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-combined-ca-bundle\") pod \"cinder-db-sync-v4h97\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.831499 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f0ee201-f570-4414-9feb-616192dfca3b-etc-machine-id\") pod \"cinder-db-sync-v4h97\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.831533 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-config-data\") pod \"cinder-db-sync-v4h97\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.832700 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f0ee201-f570-4414-9feb-616192dfca3b-etc-machine-id\") pod \"cinder-db-sync-v4h97\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.845267 4765 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-g87wz"] Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.847005 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-combined-ca-bundle\") pod \"cinder-db-sync-v4h97\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.852847 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-db-sync-config-data\") pod \"cinder-db-sync-v4h97\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.868741 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-scripts\") pod \"cinder-db-sync-v4h97\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.878700 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-config-data\") pod \"cinder-db-sync-v4h97\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.889031 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-64bhp"] Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.911000 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-v4h97"] Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.916276 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4kwq\" (UniqueName: \"kubernetes.io/projected/3f0ee201-f570-4414-9feb-616192dfca3b-kube-api-access-k4kwq\") pod \"cinder-db-sync-v4h97\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.937656 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-548d6bd7d9-2w72v" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.938311 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.940358 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.944295 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p24bt\" (UniqueName: \"kubernetes.io/projected/e7141df0-548e-4699-8620-4d85ba1b1218-kube-api-access-p24bt\") pod \"barbican-db-sync-64bhp\" (UID: \"e7141df0-548e-4699-8620-4d85ba1b1218\") " pod="openstack/barbican-db-sync-64bhp" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.944360 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5-config\") pod \"neutron-db-sync-g87wz\" (UID: \"2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5\") " pod="openstack/neutron-db-sync-g87wz" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.944507 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5-combined-ca-bundle\") pod \"neutron-db-sync-g87wz\" (UID: \"2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5\") " pod="openstack/neutron-db-sync-g87wz" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.944533 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7141df0-548e-4699-8620-4d85ba1b1218-combined-ca-bundle\") pod \"barbican-db-sync-64bhp\" (UID: \"e7141df0-548e-4699-8620-4d85ba1b1218\") " pod="openstack/barbican-db-sync-64bhp" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.944556 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e7141df0-548e-4699-8620-4d85ba1b1218-db-sync-config-data\") pod \"barbican-db-sync-64bhp\" (UID: \"e7141df0-548e-4699-8620-4d85ba1b1218\") " pod="openstack/barbican-db-sync-64bhp" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.944617 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2k46\" (UniqueName: \"kubernetes.io/projected/2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5-kube-api-access-b2k46\") pod \"neutron-db-sync-g87wz\" (UID: \"2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5\") " pod="openstack/neutron-db-sync-g87wz" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.953179 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.953602 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.956416 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-6m7js"] Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.960513 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-6m7js" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.963026 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7141df0-548e-4699-8620-4d85ba1b1218-combined-ca-bundle\") pod \"barbican-db-sync-64bhp\" (UID: \"e7141df0-548e-4699-8620-4d85ba1b1218\") " pod="openstack/barbican-db-sync-64bhp" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.974132 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-fkgrt" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.974421 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.979909 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5-config\") pod \"neutron-db-sync-g87wz\" (UID: \"2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5\") " pod="openstack/neutron-db-sync-g87wz" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.980463 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.984184 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5-combined-ca-bundle\") pod \"neutron-db-sync-g87wz\" (UID: \"2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5\") " pod="openstack/neutron-db-sync-g87wz" Jan 21 13:22:33 crc kubenswrapper[4765]: I0121 13:22:33.994869 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-6m7js"] Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.002815 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e7141df0-548e-4699-8620-4d85ba1b1218-db-sync-config-data\") pod \"barbican-db-sync-64bhp\" (UID: \"e7141df0-548e-4699-8620-4d85ba1b1218\") " pod="openstack/barbican-db-sync-64bhp" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.028964 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2k46\" (UniqueName: \"kubernetes.io/projected/2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5-kube-api-access-b2k46\") pod \"neutron-db-sync-g87wz\" (UID: \"2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5\") " pod="openstack/neutron-db-sync-g87wz" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.029649 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p24bt\" (UniqueName: \"kubernetes.io/projected/e7141df0-548e-4699-8620-4d85ba1b1218-kube-api-access-p24bt\") pod \"barbican-db-sync-64bhp\" (UID: \"e7141df0-548e-4699-8620-4d85ba1b1218\") " pod="openstack/barbican-db-sync-64bhp" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.051633 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.053577 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92340e7a-b249-4701-8527-eacaf9ba1fd7-scripts\") pod \"placement-db-sync-6m7js\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " pod="openstack/placement-db-sync-6m7js" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 
13:22:34.053642 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtxn6\" (UniqueName: \"kubernetes.io/projected/92340e7a-b249-4701-8527-eacaf9ba1fd7-kube-api-access-xtxn6\") pod \"placement-db-sync-6m7js\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " pod="openstack/placement-db-sync-6m7js" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.053683 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-config-data\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.053736 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.053770 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92340e7a-b249-4701-8527-eacaf9ba1fd7-config-data\") pod \"placement-db-sync-6m7js\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " pod="openstack/placement-db-sync-6m7js" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.053823 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.053855 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92340e7a-b249-4701-8527-eacaf9ba1fd7-combined-ca-bundle\") pod \"placement-db-sync-6m7js\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " pod="openstack/placement-db-sync-6m7js" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.053906 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78bb670d-da93-47aa-af39-981e6a9bff0f-run-httpd\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.053937 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-scripts\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.053959 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z57t9\" (UniqueName: \"kubernetes.io/projected/78bb670d-da93-47aa-af39-981e6a9bff0f-kube-api-access-z57t9\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.053999 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92340e7a-b249-4701-8527-eacaf9ba1fd7-logs\") pod \"placement-db-sync-6m7js\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " pod="openstack/placement-db-sync-6m7js" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.054024 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78bb670d-da93-47aa-af39-981e6a9bff0f-log-httpd\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.070429 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-v4h97" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.082775 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-gxcv9"] Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.088592 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-g87wz" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.154194 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-79fdbccc5f-584ld"] Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.173115 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-64bhp" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.177176 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79fdbccc5f-584ld" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.179833 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-config-data\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.179891 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.179931 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92340e7a-b249-4701-8527-eacaf9ba1fd7-config-data\") pod \"placement-db-sync-6m7js\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " pod="openstack/placement-db-sync-6m7js" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.179976 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.180002 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92340e7a-b249-4701-8527-eacaf9ba1fd7-combined-ca-bundle\") pod \"placement-db-sync-6m7js\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " pod="openstack/placement-db-sync-6m7js" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.180040 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78bb670d-da93-47aa-af39-981e6a9bff0f-run-httpd\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.180066 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-scripts\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.180091 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z57t9\" (UniqueName: \"kubernetes.io/projected/78bb670d-da93-47aa-af39-981e6a9bff0f-kube-api-access-z57t9\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.180119 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92340e7a-b249-4701-8527-eacaf9ba1fd7-logs\") pod \"placement-db-sync-6m7js\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " pod="openstack/placement-db-sync-6m7js" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.180135 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78bb670d-da93-47aa-af39-981e6a9bff0f-log-httpd\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.180194 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92340e7a-b249-4701-8527-eacaf9ba1fd7-scripts\") pod \"placement-db-sync-6m7js\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " pod="openstack/placement-db-sync-6m7js" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.180234 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtxn6\" (UniqueName: \"kubernetes.io/projected/92340e7a-b249-4701-8527-eacaf9ba1fd7-kube-api-access-xtxn6\") pod \"placement-db-sync-6m7js\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " pod="openstack/placement-db-sync-6m7js" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.181523 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78bb670d-da93-47aa-af39-981e6a9bff0f-run-httpd\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.196624 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92340e7a-b249-4701-8527-eacaf9ba1fd7-combined-ca-bundle\") pod \"placement-db-sync-6m7js\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " pod="openstack/placement-db-sync-6m7js" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.197252 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92340e7a-b249-4701-8527-eacaf9ba1fd7-logs\") pod \"placement-db-sync-6m7js\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " pod="openstack/placement-db-sync-6m7js" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.201840 
4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-scripts\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.202620 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78bb670d-da93-47aa-af39-981e6a9bff0f-log-httpd\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.205969 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92340e7a-b249-4701-8527-eacaf9ba1fd7-scripts\") pod \"placement-db-sync-6m7js\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " pod="openstack/placement-db-sync-6m7js" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.210878 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtxn6\" (UniqueName: \"kubernetes.io/projected/92340e7a-b249-4701-8527-eacaf9ba1fd7-kube-api-access-xtxn6\") pod \"placement-db-sync-6m7js\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " pod="openstack/placement-db-sync-6m7js" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.218069 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92340e7a-b249-4701-8527-eacaf9ba1fd7-config-data\") pod \"placement-db-sync-6m7js\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " pod="openstack/placement-db-sync-6m7js" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.219874 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-config-data\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.243036 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.257347 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z57t9\" (UniqueName: \"kubernetes.io/projected/78bb670d-da93-47aa-af39-981e6a9bff0f-kube-api-access-z57t9\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.271773 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.277298 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.284708 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d1e05de-5888-4222-bf1f-1a27d64ff49c-config-data\") pod \"horizon-79fdbccc5f-584ld\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " pod="openstack/horizon-79fdbccc5f-584ld" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.284972 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvrpm\" (UniqueName: \"kubernetes.io/projected/1d1e05de-5888-4222-bf1f-1a27d64ff49c-kube-api-access-jvrpm\") pod \"horizon-79fdbccc5f-584ld\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " pod="openstack/horizon-79fdbccc5f-584ld" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.285063 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d1e05de-5888-4222-bf1f-1a27d64ff49c-logs\") pod \"horizon-79fdbccc5f-584ld\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " pod="openstack/horizon-79fdbccc5f-584ld" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.285449 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d1e05de-5888-4222-bf1f-1a27d64ff49c-scripts\") pod \"horizon-79fdbccc5f-584ld\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " pod="openstack/horizon-79fdbccc5f-584ld" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.285502 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d1e05de-5888-4222-bf1f-1a27d64ff49c-horizon-secret-key\") pod \"horizon-79fdbccc5f-584ld\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " pod="openstack/horizon-79fdbccc5f-584ld" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.323079 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-cnm96"] Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.326029 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.346515 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-6m7js" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.388175 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d1e05de-5888-4222-bf1f-1a27d64ff49c-config-data\") pod \"horizon-79fdbccc5f-584ld\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " pod="openstack/horizon-79fdbccc5f-584ld" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.388421 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvrpm\" (UniqueName: \"kubernetes.io/projected/1d1e05de-5888-4222-bf1f-1a27d64ff49c-kube-api-access-jvrpm\") pod \"horizon-79fdbccc5f-584ld\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " pod="openstack/horizon-79fdbccc5f-584ld" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.388461 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d1e05de-5888-4222-bf1f-1a27d64ff49c-logs\") pod \"horizon-79fdbccc5f-584ld\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " pod="openstack/horizon-79fdbccc5f-584ld" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.388540 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d1e05de-5888-4222-bf1f-1a27d64ff49c-scripts\") pod \"horizon-79fdbccc5f-584ld\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " pod="openstack/horizon-79fdbccc5f-584ld" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.388600 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d1e05de-5888-4222-bf1f-1a27d64ff49c-horizon-secret-key\") pod \"horizon-79fdbccc5f-584ld\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " pod="openstack/horizon-79fdbccc5f-584ld" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.389735 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d1e05de-5888-4222-bf1f-1a27d64ff49c-logs\") pod \"horizon-79fdbccc5f-584ld\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " pod="openstack/horizon-79fdbccc5f-584ld" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.390412 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d1e05de-5888-4222-bf1f-1a27d64ff49c-scripts\") pod \"horizon-79fdbccc5f-584ld\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " pod="openstack/horizon-79fdbccc5f-584ld" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.394073 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d1e05de-5888-4222-bf1f-1a27d64ff49c-config-data\") pod \"horizon-79fdbccc5f-584ld\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " pod="openstack/horizon-79fdbccc5f-584ld" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.444743 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d1e05de-5888-4222-bf1f-1a27d64ff49c-horizon-secret-key\") pod \"horizon-79fdbccc5f-584ld\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " pod="openstack/horizon-79fdbccc5f-584ld" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.445116 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-56df8fb6b7-cnm96"] Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.458494 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79fdbccc5f-584ld"] Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.459111 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvrpm\" (UniqueName: \"kubernetes.io/projected/1d1e05de-5888-4222-bf1f-1a27d64ff49c-kube-api-access-jvrpm\") pod \"horizon-79fdbccc5f-584ld\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " pod="openstack/horizon-79fdbccc5f-584ld" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.495332 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-cnm96\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.495402 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-cnm96\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.495462 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-cnm96\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.495531 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-cnm96\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.495578 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rl8b\" (UniqueName: \"kubernetes.io/projected/5d57e410-03d3-422a-ba44-f5a2ed1e1417-kube-api-access-8rl8b\") pod \"dnsmasq-dns-56df8fb6b7-cnm96\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.495605 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-config\") pod \"dnsmasq-dns-56df8fb6b7-cnm96\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.515361 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.516905 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.529169 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.529391 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.529655 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.529786 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8hh29" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.529847 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.569860 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79fdbccc5f-584ld" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.605192 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-cnm96\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.605260 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-cnm96\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.605303 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-cnm96\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.605348 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-cnm96\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.605381 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rl8b\" (UniqueName: \"kubernetes.io/projected/5d57e410-03d3-422a-ba44-f5a2ed1e1417-kube-api-access-8rl8b\") pod \"dnsmasq-dns-56df8fb6b7-cnm96\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.605403 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-config\") pod \"dnsmasq-dns-56df8fb6b7-cnm96\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.606481 4765 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-config\") pod \"dnsmasq-dns-56df8fb6b7-cnm96\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.607102 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-cnm96\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.608198 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-cnm96\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.608785 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-cnm96\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.609383 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-cnm96\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.684652 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rl8b\" (UniqueName: \"kubernetes.io/projected/5d57e410-03d3-422a-ba44-f5a2ed1e1417-kube-api-access-8rl8b\") pod \"dnsmasq-dns-56df8fb6b7-cnm96\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.707682 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.707757 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-config-data\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.707782 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.707810 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-scripts\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.707879 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.707974 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.708016 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drlbn\" (UniqueName: \"kubernetes.io/projected/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-kube-api-access-drlbn\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.708086 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-logs\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.814529 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.814583 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-config-data\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.814600 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.814621 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-scripts\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.814667 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.814724 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.814760 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drlbn\" (UniqueName: \"kubernetes.io/projected/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-kube-api-access-drlbn\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.814811 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-logs\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.815534 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-logs\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.817125 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.817482 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.856475 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.856514 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.857397 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-scripts\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.864149 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drlbn\" (UniqueName: \"kubernetes.io/projected/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-kube-api-access-drlbn\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.867050 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-config-data\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.874290 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " pod="openstack/glance-default-external-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.953344 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.955238 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.957514 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.973423 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.973833 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 21 13:22:34 crc kubenswrapper[4765]: I0121 13:22:34.990909 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.021784 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgzq4\" (UniqueName: \"kubernetes.io/projected/cc70eb67-290e-462d-9c4a-b9b6adff35cb-kube-api-access-dgzq4\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.023632 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc70eb67-290e-462d-9c4a-b9b6adff35cb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.024004 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.024156 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc70eb67-290e-462d-9c4a-b9b6adff35cb-logs\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.025921 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.026107 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.026192 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.026428 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.128053 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.134239 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc70eb67-290e-462d-9c4a-b9b6adff35cb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.134303 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.134336 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc70eb67-290e-462d-9c4a-b9b6adff35cb-logs\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.134351 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.134392 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.134410 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.134466 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.134514 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgzq4\" (UniqueName: \"kubernetes.io/projected/cc70eb67-290e-462d-9c4a-b9b6adff35cb-kube-api-access-dgzq4\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.135362 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc70eb67-290e-462d-9c4a-b9b6adff35cb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.136428 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.137297 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc70eb67-290e-462d-9c4a-b9b6adff35cb-logs\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.147306 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.148444 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.167313 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.172891 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.188957 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgzq4\" (UniqueName: \"kubernetes.io/projected/cc70eb67-290e-462d-9c4a-b9b6adff35cb-kube-api-access-dgzq4\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.227577 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.329417 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.374721 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-g879q"]
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.403033 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-gxcv9"]
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.583732 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-v4h97"]
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.594375 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-548d6bd7d9-2w72v"]
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.602127 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-g87wz"]
Jan 21 13:22:35 crc kubenswrapper[4765]: W0121 13:22:35.610920 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f0ee201_f570_4414_9feb_616192dfca3b.slice/crio-3c1012177a49b4c2fcf07a16a467d0a50c02a5deddde6df208feb01bdb2eb84c WatchSource:0}: Error finding container 3c1012177a49b4c2fcf07a16a467d0a50c02a5deddde6df208feb01bdb2eb84c: Status 404 returned error can't find the container with id 3c1012177a49b4c2fcf07a16a467d0a50c02a5deddde6df208feb01bdb2eb84c
Jan 21 13:22:35 crc kubenswrapper[4765]: W0121 13:22:35.638444 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0ff6c3e_fac2_4dbb_9ce8_a31b2019b1ca.slice/crio-324ca630edf28f4579aee26fe3ca46390858f524bba01905f37824475cbf149c WatchSource:0}: Error finding container 324ca630edf28f4579aee26fe3ca46390858f524bba01905f37824475cbf149c: Status 404 returned error can't find the container with id 324ca630edf28f4579aee26fe3ca46390858f524bba01905f37824475cbf149c
Jan 21 13:22:35 crc kubenswrapper[4765]: W0121 13:22:35.672961 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f11ea5c_ca3f_4188_85f5_ba8994e1a7a5.slice/crio-fa3af29394628990df3578e8688fc8e46fae8fbfb260467fc9bd6078935cf0c6 WatchSource:0}: Error finding container fa3af29394628990df3578e8688fc8e46fae8fbfb260467fc9bd6078935cf0c6: Status 404 returned error can't find the container with id fa3af29394628990df3578e8688fc8e46fae8fbfb260467fc9bd6078935cf0c6
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.823540 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-64bhp"]
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.857123 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.955441 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g879q" event={"ID":"495fd7a5-eeae-4ebd-8606-ea08a366864e","Type":"ContainerStarted","Data":"8d605cc7ccf3d837cef8b78c80357c15df771a2a7f737872c369b5e655344bf8"}
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.955500 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g879q" event={"ID":"495fd7a5-eeae-4ebd-8606-ea08a366864e","Type":"ContainerStarted","Data":"19ad8185cec84f9fb500fa1ba1c01c587ae984332198e8077a909241d3988109"}
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.964821 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-64bhp" event={"ID":"e7141df0-548e-4699-8620-4d85ba1b1218","Type":"ContainerStarted","Data":"08d3ad05ed11744317a12ff425bc0fd49a867967dc12bfd42415a3e51b2eaf7d"}
Jan 21 13:22:35 crc kubenswrapper[4765]: I0121 13:22:35.998451 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" event={"ID":"df7728cd-9577-4616-bfaa-d0c5f1301e51","Type":"ContainerStarted","Data":"ba915601d02fe98ffee603d6259a81fc44443dadaa463cd94fb6e9b11c6b6c61"}
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.002730 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-g879q" podStartSLOduration=3.002710687 podStartE2EDuration="3.002710687s" podCreationTimestamp="2026-01-21 13:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:22:35.989307863 +0000 UTC m=+1217.007033705" watchObservedRunningTime="2026-01-21 13:22:36.002710687 +0000 UTC m=+1217.020436509"
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.013511 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-g87wz" event={"ID":"2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5","Type":"ContainerStarted","Data":"fa3af29394628990df3578e8688fc8e46fae8fbfb260467fc9bd6078935cf0c6"}
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.015984 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-v4h97" event={"ID":"3f0ee201-f570-4414-9feb-616192dfca3b","Type":"ContainerStarted","Data":"3c1012177a49b4c2fcf07a16a467d0a50c02a5deddde6df208feb01bdb2eb84c"}
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.017355 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78bb670d-da93-47aa-af39-981e6a9bff0f","Type":"ContainerStarted","Data":"27878fb9ef3b43c2faa8b9d076a6722e1831790d203a369cf79cce3e50aaf1fa"}
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.028953 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-548d6bd7d9-2w72v" event={"ID":"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca","Type":"ContainerStarted","Data":"324ca630edf28f4579aee26fe3ca46390858f524bba01905f37824475cbf149c"}
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.108070 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-6m7js"]
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.278244 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-cnm96"]
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.323990 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79fdbccc5f-584ld"]
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.492171 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.683873 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.744990 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 13:22:36 crc kubenswrapper[4765]: W0121 13:22:36.772794 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc70eb67_290e_462d_9c4a_b9b6adff35cb.slice/crio-6ae93f668b4162c9f3aca3bce7645eebfe3904a72a0dc85064778d6b94b03ec1 WatchSource:0}: Error finding container 6ae93f668b4162c9f3aca3bce7645eebfe3904a72a0dc85064778d6b94b03ec1: Status 404 returned error can't find the container with id 6ae93f668b4162c9f3aca3bce7645eebfe3904a72a0dc85064778d6b94b03ec1
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.811359 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-548d6bd7d9-2w72v"]
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.841112 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7d68b4fcfc-tnw87"]
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.843394 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d68b4fcfc-tnw87"
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.863123 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d68b4fcfc-tnw87"]
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.883407 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/784cf761-2eea-4807-b2bd-94d7dcddecc2-scripts\") pod \"horizon-7d68b4fcfc-tnw87\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " pod="openstack/horizon-7d68b4fcfc-tnw87"
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.883525 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbsf7\" (UniqueName: \"kubernetes.io/projected/784cf761-2eea-4807-b2bd-94d7dcddecc2-kube-api-access-nbsf7\") pod \"horizon-7d68b4fcfc-tnw87\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " pod="openstack/horizon-7d68b4fcfc-tnw87"
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.883551 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/784cf761-2eea-4807-b2bd-94d7dcddecc2-horizon-secret-key\") pod \"horizon-7d68b4fcfc-tnw87\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " pod="openstack/horizon-7d68b4fcfc-tnw87"
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.883596 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/784cf761-2eea-4807-b2bd-94d7dcddecc2-config-data\") pod \"horizon-7d68b4fcfc-tnw87\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " pod="openstack/horizon-7d68b4fcfc-tnw87"
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.883621 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/784cf761-2eea-4807-b2bd-94d7dcddecc2-logs\") pod \"horizon-7d68b4fcfc-tnw87\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " pod="openstack/horizon-7d68b4fcfc-tnw87"
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.911092 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.920734 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.986174 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/784cf761-2eea-4807-b2bd-94d7dcddecc2-horizon-secret-key\") pod \"horizon-7d68b4fcfc-tnw87\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " pod="openstack/horizon-7d68b4fcfc-tnw87"
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.986261 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbsf7\" (UniqueName: \"kubernetes.io/projected/784cf761-2eea-4807-b2bd-94d7dcddecc2-kube-api-access-nbsf7\") pod \"horizon-7d68b4fcfc-tnw87\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " pod="openstack/horizon-7d68b4fcfc-tnw87"
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.986304 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/784cf761-2eea-4807-b2bd-94d7dcddecc2-config-data\") pod \"horizon-7d68b4fcfc-tnw87\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " pod="openstack/horizon-7d68b4fcfc-tnw87"
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.986320 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/784cf761-2eea-4807-b2bd-94d7dcddecc2-logs\") pod \"horizon-7d68b4fcfc-tnw87\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " pod="openstack/horizon-7d68b4fcfc-tnw87"
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.986408 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/784cf761-2eea-4807-b2bd-94d7dcddecc2-scripts\") pod \"horizon-7d68b4fcfc-tnw87\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " pod="openstack/horizon-7d68b4fcfc-tnw87"
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.987184 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/784cf761-2eea-4807-b2bd-94d7dcddecc2-scripts\") pod \"horizon-7d68b4fcfc-tnw87\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " pod="openstack/horizon-7d68b4fcfc-tnw87"
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.989487 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/784cf761-2eea-4807-b2bd-94d7dcddecc2-config-data\") pod \"horizon-7d68b4fcfc-tnw87\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " pod="openstack/horizon-7d68b4fcfc-tnw87"
Jan 21 13:22:36 crc kubenswrapper[4765]: I0121 13:22:36.989752 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/784cf761-2eea-4807-b2bd-94d7dcddecc2-logs\") pod \"horizon-7d68b4fcfc-tnw87\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " pod="openstack/horizon-7d68b4fcfc-tnw87"
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.004000 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/784cf761-2eea-4807-b2bd-94d7dcddecc2-horizon-secret-key\") pod \"horizon-7d68b4fcfc-tnw87\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " pod="openstack/horizon-7d68b4fcfc-tnw87"
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.026805 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbsf7\" (UniqueName: \"kubernetes.io/projected/784cf761-2eea-4807-b2bd-94d7dcddecc2-kube-api-access-nbsf7\") pod \"horizon-7d68b4fcfc-tnw87\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " pod="openstack/horizon-7d68b4fcfc-tnw87"
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.076148 4765 generic.go:334] "Generic (PLEG): container finished" podID="df7728cd-9577-4616-bfaa-d0c5f1301e51" containerID="d0c4894b819db7ee7ec3650881397ba25617eddd1056f5901510d0da65fccd28" exitCode=0
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.076252 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" event={"ID":"df7728cd-9577-4616-bfaa-d0c5f1301e51","Type":"ContainerDied","Data":"d0c4894b819db7ee7ec3650881397ba25617eddd1056f5901510d0da65fccd28"}
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.084556 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cc70eb67-290e-462d-9c4a-b9b6adff35cb","Type":"ContainerStarted","Data":"6ae93f668b4162c9f3aca3bce7645eebfe3904a72a0dc85064778d6b94b03ec1"}
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.099535 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" event={"ID":"5d57e410-03d3-422a-ba44-f5a2ed1e1417","Type":"ContainerStarted","Data":"15d9e92db3a6fea01aa4fffe3d9549a2cc4ffe7ec0aa98237fb934b6910d538f"}
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.123112 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-g87wz" event={"ID":"2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5","Type":"ContainerStarted","Data":"55a7211486ad090d246dd116d0b0b13604208a9504841e230e9d04aabbf7b482"}
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.133642 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6m7js" event={"ID":"92340e7a-b249-4701-8527-eacaf9ba1fd7","Type":"ContainerStarted","Data":"a08eac820b85d27ce549beef14e9add370113edd7ab615b3497dd20b1d0ff79a"}
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.138354 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79fdbccc5f-584ld" event={"ID":"1d1e05de-5888-4222-bf1f-1a27d64ff49c","Type":"ContainerStarted","Data":"8ab365676adc16db79401d8e412a879dacdd0263d5c31c54520416d796497b30"}
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.145309 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-g87wz" podStartSLOduration=4.145291926 podStartE2EDuration="4.145291926s" podCreationTimestamp="2026-01-21 13:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:22:37.143910155 +0000 UTC m=+1218.161635977" watchObservedRunningTime="2026-01-21 13:22:37.145291926 +0000 UTC m=+1218.163017748"
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.156438 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562","Type":"ContainerStarted","Data":"0e95bb3c05a315812fecdeaa5ef8d479f07505a984c18fc73031b7ce664de25b"}
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.258703 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d68b4fcfc-tnw87"
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.644914 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9"
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.821790 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-ovsdbserver-sb\") pod \"df7728cd-9577-4616-bfaa-d0c5f1301e51\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") "
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.821899 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-ovsdbserver-nb\") pod \"df7728cd-9577-4616-bfaa-d0c5f1301e51\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") "
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.821930 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-config\") pod \"df7728cd-9577-4616-bfaa-d0c5f1301e51\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") "
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.822019 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-dns-swift-storage-0\") pod \"df7728cd-9577-4616-bfaa-d0c5f1301e51\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") "
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.822042 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvkvn\" (UniqueName: \"kubernetes.io/projected/df7728cd-9577-4616-bfaa-d0c5f1301e51-kube-api-access-pvkvn\") pod \"df7728cd-9577-4616-bfaa-d0c5f1301e51\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") "
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.822175 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-dns-svc\") pod \"df7728cd-9577-4616-bfaa-d0c5f1301e51\" (UID: \"df7728cd-9577-4616-bfaa-d0c5f1301e51\") "
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.844394 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df7728cd-9577-4616-bfaa-d0c5f1301e51-kube-api-access-pvkvn" (OuterVolumeSpecName: "kube-api-access-pvkvn") pod "df7728cd-9577-4616-bfaa-d0c5f1301e51" (UID: "df7728cd-9577-4616-bfaa-d0c5f1301e51"). InnerVolumeSpecName "kube-api-access-pvkvn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.861058 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "df7728cd-9577-4616-bfaa-d0c5f1301e51" (UID: "df7728cd-9577-4616-bfaa-d0c5f1301e51"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.867102 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "df7728cd-9577-4616-bfaa-d0c5f1301e51" (UID: "df7728cd-9577-4616-bfaa-d0c5f1301e51"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.870449 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-config" (OuterVolumeSpecName: "config") pod "df7728cd-9577-4616-bfaa-d0c5f1301e51" (UID: "df7728cd-9577-4616-bfaa-d0c5f1301e51"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.887611 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "df7728cd-9577-4616-bfaa-d0c5f1301e51" (UID: "df7728cd-9577-4616-bfaa-d0c5f1301e51"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.888400 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "df7728cd-9577-4616-bfaa-d0c5f1301e51" (UID: "df7728cd-9577-4616-bfaa-d0c5f1301e51"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.923839 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.924845 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.924862 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-config\") on node \"crc\" DevicePath \"\""
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.924872 4765 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.924880 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvkvn\" (UniqueName: \"kubernetes.io/projected/df7728cd-9577-4616-bfaa-d0c5f1301e51-kube-api-access-pvkvn\") on node \"crc\" DevicePath \"\""
Jan 21 13:22:37 crc kubenswrapper[4765]: I0121 13:22:37.924890 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df7728cd-9577-4616-bfaa-d0c5f1301e51-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 13:22:38 crc kubenswrapper[4765]: I0121 13:22:38.056836 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d68b4fcfc-tnw87"]
Jan 21 13:22:38 crc kubenswrapper[4765]: I0121 13:22:38.172837 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9" event={"ID":"df7728cd-9577-4616-bfaa-d0c5f1301e51","Type":"ContainerDied","Data":"ba915601d02fe98ffee603d6259a81fc44443dadaa463cd94fb6e9b11c6b6c61"}
Jan 21 13:22:38 crc kubenswrapper[4765]: I0121 13:22:38.172918 4765 scope.go:117] "RemoveContainer" containerID="d0c4894b819db7ee7ec3650881397ba25617eddd1056f5901510d0da65fccd28"
Jan 21 13:22:38 crc kubenswrapper[4765]: I0121 13:22:38.174321 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-gxcv9"
Jan 21 13:22:38 crc kubenswrapper[4765]: I0121 13:22:38.176718 4765 generic.go:334] "Generic (PLEG): container finished" podID="5d57e410-03d3-422a-ba44-f5a2ed1e1417" containerID="cf840497ecd2ecf9bbaf1281ba69019067972d1472cc515d9f17d55a7f7a836c" exitCode=0
Jan 21 13:22:38 crc kubenswrapper[4765]: I0121 13:22:38.176827 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" event={"ID":"5d57e410-03d3-422a-ba44-f5a2ed1e1417","Type":"ContainerDied","Data":"cf840497ecd2ecf9bbaf1281ba69019067972d1472cc515d9f17d55a7f7a836c"}
Jan 21 13:22:38 crc kubenswrapper[4765]: I0121 13:22:38.185608 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d68b4fcfc-tnw87" event={"ID":"784cf761-2eea-4807-b2bd-94d7dcddecc2","Type":"ContainerStarted","Data":"eb6929c849be68783d0bf31343202d0b2634ec0418b75574f5ab28df86ac35c3"}
Jan 21 13:22:38 crc kubenswrapper[4765]: I0121 13:22:38.295154 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-gxcv9"]
Jan 21 13:22:38 crc kubenswrapper[4765]: I0121 13:22:38.310707 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-gxcv9"]
Jan 21 13:22:39 crc kubenswrapper[4765]: I0121 13:22:39.248512 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" event={"ID":"5d57e410-03d3-422a-ba44-f5a2ed1e1417","Type":"ContainerStarted","Data":"571e112947afaefee85d3cb4e2b6d6d0f03f65661f6d1e8d32a2f104c267813d"}
Jan 21 13:22:39 crc kubenswrapper[4765]: I0121 13:22:39.249306 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96"
Jan 21 13:22:39 crc kubenswrapper[4765]: I0121 13:22:39.258505 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cc70eb67-290e-462d-9c4a-b9b6adff35cb","Type":"ContainerStarted","Data":"7d02961743e2b97069451ae25fdf08a23ac40542b27d1496cec26a583b46439c"}
Jan 21 13:22:39 crc kubenswrapper[4765]: I0121 13:22:39.270611 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" podStartSLOduration=5.270593417 podStartE2EDuration="5.270593417s" podCreationTimestamp="2026-01-21 13:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:22:39.268085373 +0000 UTC m=+1220.285811195" watchObservedRunningTime="2026-01-21 13:22:39.270593417 +0000 UTC m=+1220.288319239"
Jan 21 13:22:39 crc kubenswrapper[4765]: I0121 13:22:39.294786 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562","Type":"ContainerStarted","Data":"3ec5ced41fffab823682f1a854c5134ba3041bad17168cce3fa2065df5d319b9"}
Jan 21 13:22:39 crc kubenswrapper[4765]: I0121 13:22:39.638727 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df7728cd-9577-4616-bfaa-d0c5f1301e51" path="/var/lib/kubelet/pods/df7728cd-9577-4616-bfaa-d0c5f1301e51/volumes"
Jan 21 13:22:42 crc kubenswrapper[4765]: I0121 13:22:42.348911 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cc70eb67-290e-462d-9c4a-b9b6adff35cb","Type":"ContainerStarted","Data":"d87a2d5bee7860b1928d43ff257ff0b5eda0b79f5d9a0215393d76e18f608dc8"}
Jan 21 13:22:42 crc kubenswrapper[4765]: I0121 13:22:42.349117 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cc70eb67-290e-462d-9c4a-b9b6adff35cb" containerName="glance-log" containerID="cri-o://7d02961743e2b97069451ae25fdf08a23ac40542b27d1496cec26a583b46439c" gracePeriod=30
Jan 21 13:22:42 crc kubenswrapper[4765]: I0121 13:22:42.349451 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cc70eb67-290e-462d-9c4a-b9b6adff35cb" containerName="glance-httpd" containerID="cri-o://d87a2d5bee7860b1928d43ff257ff0b5eda0b79f5d9a0215393d76e18f608dc8" gracePeriod=30
Jan 21 13:22:42 crc kubenswrapper[4765]: I0121 13:22:42.369372 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562","Type":"ContainerStarted","Data":"ff358a461fc9d904f53f1fd5b5c47e779a60fe4e0ea88829610b92b8783a135f"}
Jan 21 13:22:42 crc kubenswrapper[4765]: I0121 13:22:42.369505 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7ae9c0ce-3c26-40ee-82b4-eb2ad1503562" containerName="glance-log" containerID="cri-o://3ec5ced41fffab823682f1a854c5134ba3041bad17168cce3fa2065df5d319b9" gracePeriod=30
Jan 21 13:22:42 crc kubenswrapper[4765]: I0121 13:22:42.369641 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7ae9c0ce-3c26-40ee-82b4-eb2ad1503562" containerName="glance-httpd" containerID="cri-o://ff358a461fc9d904f53f1fd5b5c47e779a60fe4e0ea88829610b92b8783a135f" gracePeriod=30
Jan 21 13:22:42 crc kubenswrapper[4765]: I0121 13:22:42.382320 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.382298121 podStartE2EDuration="9.382298121s" podCreationTimestamp="2026-01-21 13:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:22:42.376702726 +0000 UTC m=+1223.394428548" watchObservedRunningTime="2026-01-21 13:22:42.382298121 +0000 UTC m=+1223.400023943"
Jan 21 13:22:42 crc kubenswrapper[4765]: I0121 13:22:42.411863 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.411844189 podStartE2EDuration="8.411844189s" podCreationTimestamp="2026-01-21 13:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:22:42.406424319 +0000 UTC m=+1223.424150161" watchObservedRunningTime="2026-01-21 13:22:42.411844189 +0000 UTC m=+1223.429570011"
Jan 21 13:22:42 crc kubenswrapper[4765]: I0121 13:22:42.830924 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79fdbccc5f-584ld"]
Jan 21 13:22:42 crc kubenswrapper[4765]: I0121 13:22:42.930700 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6558674dbd-lct5s"]
Jan 21 13:22:42 crc kubenswrapper[4765]: E0121 13:22:42.931265 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7728cd-9577-4616-bfaa-d0c5f1301e51" containerName="init"
Jan 21 13:22:42 crc kubenswrapper[4765]: I0121 13:22:42.931286 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7728cd-9577-4616-bfaa-d0c5f1301e51" containerName="init"
Jan 21 13:22:42 crc kubenswrapper[4765]: I0121 13:22:42.931525 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7728cd-9577-4616-bfaa-d0c5f1301e51" containerName="init"
Jan 21 13:22:42 crc kubenswrapper[4765]: I0121 13:22:42.940733 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:42 crc kubenswrapper[4765]: I0121 13:22:42.945597 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Jan 21 13:22:42 crc kubenswrapper[4765]: I0121 13:22:42.954666 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6558674dbd-lct5s"]
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.002251 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/074ae613-bc7f-4443-abdb-7010b6054997-scripts\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.002309 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf8xn\" (UniqueName: \"kubernetes.io/projected/074ae613-bc7f-4443-abdb-7010b6054997-kube-api-access-lf8xn\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.002339 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/074ae613-bc7f-4443-abdb-7010b6054997-horizon-tls-certs\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.002382 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/074ae613-bc7f-4443-abdb-7010b6054997-logs\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.002421 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/074ae613-bc7f-4443-abdb-7010b6054997-config-data\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.002455 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/074ae613-bc7f-4443-abdb-7010b6054997-combined-ca-bundle\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.002477 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/074ae613-bc7f-4443-abdb-7010b6054997-horizon-secret-key\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.052355 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d68b4fcfc-tnw87"]
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.164359 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-86c57777f6-gqpgv"]
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.166178 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86c57777f6-gqpgv"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.175230 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1241b1f0-34c1-401a-b91f-13b72926cc2c-combined-ca-bundle\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.175354 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/074ae613-bc7f-4443-abdb-7010b6054997-config-data\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.175491 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/074ae613-bc7f-4443-abdb-7010b6054997-combined-ca-bundle\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.175547 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/074ae613-bc7f-4443-abdb-7010b6054997-horizon-secret-key\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.175627 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1241b1f0-34c1-401a-b91f-13b72926cc2c-logs\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.175663 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1241b1f0-34c1-401a-b91f-13b72926cc2c-config-data\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.175784 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1241b1f0-34c1-401a-b91f-13b72926cc2c-horizon-secret-key\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.175867 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l62l\" (UniqueName: \"kubernetes.io/projected/1241b1f0-34c1-401a-b91f-13b72926cc2c-kube-api-access-6l62l\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.175951 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/074ae613-bc7f-4443-abdb-7010b6054997-scripts\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.176257 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf8xn\" (UniqueName: \"kubernetes.io/projected/074ae613-bc7f-4443-abdb-7010b6054997-kube-api-access-lf8xn\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.176330 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/074ae613-bc7f-4443-abdb-7010b6054997-horizon-tls-certs\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.176469 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1241b1f0-34c1-401a-b91f-13b72926cc2c-scripts\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.176549 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1241b1f0-34c1-401a-b91f-13b72926cc2c-horizon-tls-certs\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.176574 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/074ae613-bc7f-4443-abdb-7010b6054997-logs\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.194858 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/074ae613-bc7f-4443-abdb-7010b6054997-scripts\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.195311 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/074ae613-bc7f-4443-abdb-7010b6054997-horizon-secret-key\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.195794 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/074ae613-bc7f-4443-abdb-7010b6054997-logs\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.236855 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/074ae613-bc7f-4443-abdb-7010b6054997-horizon-tls-certs\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.237654 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf8xn\" (UniqueName: \"kubernetes.io/projected/074ae613-bc7f-4443-abdb-7010b6054997-kube-api-access-lf8xn\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.244124 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/074ae613-bc7f-4443-abdb-7010b6054997-config-data\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.257136 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/074ae613-bc7f-4443-abdb-7010b6054997-combined-ca-bundle\") pod \"horizon-6558674dbd-lct5s\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.278466 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.303586 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1241b1f0-34c1-401a-b91f-13b72926cc2c-logs\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.303645 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1241b1f0-34c1-401a-b91f-13b72926cc2c-config-data\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.303738 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1241b1f0-34c1-401a-b91f-13b72926cc2c-horizon-secret-key\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.303774 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l62l\" (UniqueName: \"kubernetes.io/projected/1241b1f0-34c1-401a-b91f-13b72926cc2c-kube-api-access-6l62l\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.303970 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1241b1f0-34c1-401a-b91f-13b72926cc2c-scripts\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.303997 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1241b1f0-34c1-401a-b91f-13b72926cc2c-horizon-tls-certs\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.304057 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1241b1f0-34c1-401a-b91f-13b72926cc2c-combined-ca-bundle\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.313359 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1241b1f0-34c1-401a-b91f-13b72926cc2c-horizon-secret-key\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv"
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.321284 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86c57777f6-gqpgv"]
Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.342758 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1241b1f0-34c1-401a-b91f-13b72926cc2c-horizon-tls-certs\") pod \"horizon-86c57777f6-gqpgv\" (UID:
\"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv" Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.342811 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1241b1f0-34c1-401a-b91f-13b72926cc2c-combined-ca-bundle\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv" Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.343197 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1241b1f0-34c1-401a-b91f-13b72926cc2c-scripts\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv" Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.344942 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1241b1f0-34c1-401a-b91f-13b72926cc2c-config-data\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv" Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.345992 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1241b1f0-34c1-401a-b91f-13b72926cc2c-logs\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv" Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.372523 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l62l\" (UniqueName: \"kubernetes.io/projected/1241b1f0-34c1-401a-b91f-13b72926cc2c-kube-api-access-6l62l\") pod \"horizon-86c57777f6-gqpgv\" (UID: \"1241b1f0-34c1-401a-b91f-13b72926cc2c\") " pod="openstack/horizon-86c57777f6-gqpgv" Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.373841 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-86c57777f6-gqpgv" Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.462102 4765 generic.go:334] "Generic (PLEG): container finished" podID="7ae9c0ce-3c26-40ee-82b4-eb2ad1503562" containerID="ff358a461fc9d904f53f1fd5b5c47e779a60fe4e0ea88829610b92b8783a135f" exitCode=143 Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.462160 4765 generic.go:334] "Generic (PLEG): container finished" podID="7ae9c0ce-3c26-40ee-82b4-eb2ad1503562" containerID="3ec5ced41fffab823682f1a854c5134ba3041bad17168cce3fa2065df5d319b9" exitCode=143 Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.462283 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562","Type":"ContainerDied","Data":"ff358a461fc9d904f53f1fd5b5c47e779a60fe4e0ea88829610b92b8783a135f"} Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.462317 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562","Type":"ContainerDied","Data":"3ec5ced41fffab823682f1a854c5134ba3041bad17168cce3fa2065df5d319b9"} Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.483392 4765 generic.go:334] "Generic (PLEG): container finished" podID="cc70eb67-290e-462d-9c4a-b9b6adff35cb" containerID="d87a2d5bee7860b1928d43ff257ff0b5eda0b79f5d9a0215393d76e18f608dc8" exitCode=143 Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.483430 4765 generic.go:334] "Generic (PLEG): container finished" podID="cc70eb67-290e-462d-9c4a-b9b6adff35cb" containerID="7d02961743e2b97069451ae25fdf08a23ac40542b27d1496cec26a583b46439c" exitCode=143 Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.483451 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cc70eb67-290e-462d-9c4a-b9b6adff35cb","Type":"ContainerDied","Data":"d87a2d5bee7860b1928d43ff257ff0b5eda0b79f5d9a0215393d76e18f608dc8"} Jan 21 13:22:43 crc kubenswrapper[4765]: I0121 13:22:43.483495 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cc70eb67-290e-462d-9c4a-b9b6adff35cb","Type":"ContainerDied","Data":"7d02961743e2b97069451ae25fdf08a23ac40542b27d1496cec26a583b46439c"} Jan 21 13:22:44 crc kubenswrapper[4765]: I0121 13:22:44.445592 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:22:44 crc kubenswrapper[4765]: I0121 13:22:44.445930 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:22:44 crc kubenswrapper[4765]: I0121 13:22:44.445989 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:22:44 crc kubenswrapper[4765]: I0121 13:22:44.446839 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"d6699bbbe2d11832c001ff2e320299357488d5335ab1941c1de1fb9e99aec3a1"} pod="openshift-machine-config-operator/machine-config-daemon-v72nq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:22:44 crc kubenswrapper[4765]: I0121 13:22:44.446897 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" containerID="cri-o://d6699bbbe2d11832c001ff2e320299357488d5335ab1941c1de1fb9e99aec3a1" gracePeriod=600 Jan 21 13:22:44 crc kubenswrapper[4765]: I0121 13:22:44.961478 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" Jan 21 13:22:45 crc kubenswrapper[4765]: I0121 13:22:45.052592 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-w9btc"] Jan 21 13:22:45 crc kubenswrapper[4765]: I0121 13:22:45.052916 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" podUID="121a128d-b52a-4cb6-a62c-34380823877c" containerName="dnsmasq-dns" containerID="cri-o://cc03b92e62ccb8d90084d7be2f638ce4c082aba4a211f71aec3bbfa9509605c1" gracePeriod=10 Jan 21 13:22:45 crc kubenswrapper[4765]: I0121 13:22:45.507887 4765 generic.go:334] "Generic (PLEG): container finished" podID="e149390c-e4da-4dfd-bed2-b14de058f921" containerID="d6699bbbe2d11832c001ff2e320299357488d5335ab1941c1de1fb9e99aec3a1" exitCode=0 Jan 21 13:22:45 crc kubenswrapper[4765]: I0121 13:22:45.507964 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerDied","Data":"d6699bbbe2d11832c001ff2e320299357488d5335ab1941c1de1fb9e99aec3a1"} Jan 21 13:22:45 crc kubenswrapper[4765]: I0121 13:22:45.507998 4765 scope.go:117] "RemoveContainer" containerID="3163e8db45db8b9601f45b03cbef2661d131b6e749b48c66d1778284a24a76c2" Jan 21 13:22:45 crc kubenswrapper[4765]: I0121 13:22:45.511098 4765 generic.go:334] "Generic (PLEG): container finished" podID="121a128d-b52a-4cb6-a62c-34380823877c" containerID="cc03b92e62ccb8d90084d7be2f638ce4c082aba4a211f71aec3bbfa9509605c1" exitCode=0 Jan 21 13:22:45 crc kubenswrapper[4765]: I0121 13:22:45.511203 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" event={"ID":"121a128d-b52a-4cb6-a62c-34380823877c","Type":"ContainerDied","Data":"cc03b92e62ccb8d90084d7be2f638ce4c082aba4a211f71aec3bbfa9509605c1"} Jan 21 13:22:46 crc kubenswrapper[4765]: I0121 13:22:46.489984 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" podUID="121a128d-b52a-4cb6-a62c-34380823877c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Jan 21 13:22:46 crc kubenswrapper[4765]: I0121 13:22:46.521764 4765 generic.go:334] "Generic (PLEG): container finished" podID="495fd7a5-eeae-4ebd-8606-ea08a366864e" containerID="8d605cc7ccf3d837cef8b78c80357c15df771a2a7f737872c369b5e655344bf8" exitCode=0 Jan 21 13:22:46 crc kubenswrapper[4765]: I0121 13:22:46.522006 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g879q" 
event={"ID":"495fd7a5-eeae-4ebd-8606-ea08a366864e","Type":"ContainerDied","Data":"8d605cc7ccf3d837cef8b78c80357c15df771a2a7f737872c369b5e655344bf8"} Jan 21 13:22:51 crc kubenswrapper[4765]: I0121 13:22:51.490569 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" podUID="121a128d-b52a-4cb6-a62c-34380823877c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Jan 21 13:22:56 crc kubenswrapper[4765]: I0121 13:22:56.489716 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" podUID="121a128d-b52a-4cb6-a62c-34380823877c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Jan 21 13:22:56 crc kubenswrapper[4765]: I0121 13:22:56.490526 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:22:56 crc kubenswrapper[4765]: E0121 13:22:56.760162 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 21 13:22:56 crc kubenswrapper[4765]: E0121 13:22:56.760375 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n555h5f7h58bh4h9h664h646h79hcfh679h5dbh66bhfbh647h65dh8ch66h6dh65fh94h78h5d9h8bh67bhb5h5ch5f5h5f6h54dh568hd8h65q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nbsf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7d68b4fcfc-tnw87_openstack(784cf761-2eea-4807-b2bd-94d7dcddecc2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:22:56 crc kubenswrapper[4765]: E0121 13:22:56.937956 4765 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7d68b4fcfc-tnw87" podUID="784cf761-2eea-4807-b2bd-94d7dcddecc2" Jan 21 13:22:56 crc kubenswrapper[4765]: E0121 13:22:56.943014 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 21 13:22:56 crc kubenswrapper[4765]: E0121 13:22:56.943317 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndbh547h685h75h565h8fh5bchbch57bh68dh96h68ch565h9bh54dh64h5cdh5d7h6ch5fch675h5f9h648h5b8h6hfch666h59ch577h597h547h75q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-86fjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-548d6bd7d9-2w72v_openstack(e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:22:56 crc kubenswrapper[4765]: E0121 13:22:56.945448 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-548d6bd7d9-2w72v" podUID="e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.100970 4765 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.110594 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.127732 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.162639 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-combined-ca-bundle\") pod \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.162723 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-config-data\") pod \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.162783 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-httpd-run\") pod \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.162867 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drlbn\" (UniqueName: \"kubernetes.io/projected/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-kube-api-access-drlbn\") pod \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.163728 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-logs\") pod \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.163782 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-scripts\") pod \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.163829 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.163907 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-public-tls-certs\") pod \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\" (UID: \"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.165941 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7ae9c0ce-3c26-40ee-82b4-eb2ad1503562" (UID: "7ae9c0ce-3c26-40ee-82b4-eb2ad1503562"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.168086 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-logs" (OuterVolumeSpecName: "logs") pod "7ae9c0ce-3c26-40ee-82b4-eb2ad1503562" (UID: "7ae9c0ce-3c26-40ee-82b4-eb2ad1503562"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.172339 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "7ae9c0ce-3c26-40ee-82b4-eb2ad1503562" (UID: "7ae9c0ce-3c26-40ee-82b4-eb2ad1503562"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.191140 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-kube-api-access-drlbn" (OuterVolumeSpecName: "kube-api-access-drlbn") pod "7ae9c0ce-3c26-40ee-82b4-eb2ad1503562" (UID: "7ae9c0ce-3c26-40ee-82b4-eb2ad1503562"). InnerVolumeSpecName "kube-api-access-drlbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.191277 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-scripts" (OuterVolumeSpecName: "scripts") pod "7ae9c0ce-3c26-40ee-82b4-eb2ad1503562" (UID: "7ae9c0ce-3c26-40ee-82b4-eb2ad1503562"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.265539 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-combined-ca-bundle\") pod \"495fd7a5-eeae-4ebd-8606-ea08a366864e\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.265603 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-internal-tls-certs\") pod \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.265673 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc70eb67-290e-462d-9c4a-b9b6adff35cb-logs\") pod \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.265721 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-scripts\") pod \"495fd7a5-eeae-4ebd-8606-ea08a366864e\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.265741 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-scripts\") pod \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " Jan 21 13:22:57 crc 
kubenswrapper[4765]: I0121 13:22:57.265765 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.265791 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-fernet-keys\") pod \"495fd7a5-eeae-4ebd-8606-ea08a366864e\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.265865 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc70eb67-290e-462d-9c4a-b9b6adff35cb-httpd-run\") pod \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.265888 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-combined-ca-bundle\") pod \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.265925 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-config-data\") pod \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.265944 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-credential-keys\") pod \"495fd7a5-eeae-4ebd-8606-ea08a366864e\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.266011 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgzq4\" (UniqueName: \"kubernetes.io/projected/cc70eb67-290e-462d-9c4a-b9b6adff35cb-kube-api-access-dgzq4\") pod \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\" (UID: \"cc70eb67-290e-462d-9c4a-b9b6adff35cb\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.266079 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-config-data\") pod \"495fd7a5-eeae-4ebd-8606-ea08a366864e\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.266106 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s25jn\" (UniqueName: \"kubernetes.io/projected/495fd7a5-eeae-4ebd-8606-ea08a366864e-kube-api-access-s25jn\") pod \"495fd7a5-eeae-4ebd-8606-ea08a366864e\" (UID: \"495fd7a5-eeae-4ebd-8606-ea08a366864e\") " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.266563 4765 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.266577 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drlbn\" (UniqueName: 
\"kubernetes.io/projected/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-kube-api-access-drlbn\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.266587 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.266595 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.266614 4765 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.269390 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc70eb67-290e-462d-9c4a-b9b6adff35cb-logs" (OuterVolumeSpecName: "logs") pod "cc70eb67-290e-462d-9c4a-b9b6adff35cb" (UID: "cc70eb67-290e-462d-9c4a-b9b6adff35cb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.282455 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ae9c0ce-3c26-40ee-82b4-eb2ad1503562" (UID: "7ae9c0ce-3c26-40ee-82b4-eb2ad1503562"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.290667 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc70eb67-290e-462d-9c4a-b9b6adff35cb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cc70eb67-290e-462d-9c4a-b9b6adff35cb" (UID: "cc70eb67-290e-462d-9c4a-b9b6adff35cb"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.291283 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-scripts" (OuterVolumeSpecName: "scripts") pod "495fd7a5-eeae-4ebd-8606-ea08a366864e" (UID: "495fd7a5-eeae-4ebd-8606-ea08a366864e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.291592 4765 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.291790 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc70eb67-290e-462d-9c4a-b9b6adff35cb-kube-api-access-dgzq4" (OuterVolumeSpecName: "kube-api-access-dgzq4") pod "cc70eb67-290e-462d-9c4a-b9b6adff35cb" (UID: "cc70eb67-290e-462d-9c4a-b9b6adff35cb"). InnerVolumeSpecName "kube-api-access-dgzq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.291936 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/495fd7a5-eeae-4ebd-8606-ea08a366864e-kube-api-access-s25jn" (OuterVolumeSpecName: "kube-api-access-s25jn") pod "495fd7a5-eeae-4ebd-8606-ea08a366864e" (UID: "495fd7a5-eeae-4ebd-8606-ea08a366864e"). InnerVolumeSpecName "kube-api-access-s25jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.293502 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "cc70eb67-290e-462d-9c4a-b9b6adff35cb" (UID: "cc70eb67-290e-462d-9c4a-b9b6adff35cb"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.293596 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "495fd7a5-eeae-4ebd-8606-ea08a366864e" (UID: "495fd7a5-eeae-4ebd-8606-ea08a366864e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.295402 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "495fd7a5-eeae-4ebd-8606-ea08a366864e" (UID: "495fd7a5-eeae-4ebd-8606-ea08a366864e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.300020 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-scripts" (OuterVolumeSpecName: "scripts") pod "cc70eb67-290e-462d-9c4a-b9b6adff35cb" (UID: "cc70eb67-290e-462d-9c4a-b9b6adff35cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.306844 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "495fd7a5-eeae-4ebd-8606-ea08a366864e" (UID: "495fd7a5-eeae-4ebd-8606-ea08a366864e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.307467 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-config-data" (OuterVolumeSpecName: "config-data") pod "7ae9c0ce-3c26-40ee-82b4-eb2ad1503562" (UID: "7ae9c0ce-3c26-40ee-82b4-eb2ad1503562"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.321513 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7ae9c0ce-3c26-40ee-82b4-eb2ad1503562" (UID: "7ae9c0ce-3c26-40ee-82b4-eb2ad1503562"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.330112 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-config-data" (OuterVolumeSpecName: "config-data") pod "495fd7a5-eeae-4ebd-8606-ea08a366864e" (UID: "495fd7a5-eeae-4ebd-8606-ea08a366864e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.330125 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc70eb67-290e-462d-9c4a-b9b6adff35cb" (UID: "cc70eb67-290e-462d-9c4a-b9b6adff35cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.346966 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cc70eb67-290e-462d-9c4a-b9b6adff35cb" (UID: "cc70eb67-290e-462d-9c4a-b9b6adff35cb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.357995 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-config-data" (OuterVolumeSpecName: "config-data") pod "cc70eb67-290e-462d-9c4a-b9b6adff35cb" (UID: "cc70eb67-290e-462d-9c4a-b9b6adff35cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.368158 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.368192 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.368241 4765 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.368256 4765 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.368267 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.368275 4765 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc70eb67-290e-462d-9c4a-b9b6adff35cb-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.368283 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.368294 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.368303 4765 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.368318 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgzq4\" (UniqueName: \"kubernetes.io/projected/cc70eb67-290e-462d-9c4a-b9b6adff35cb-kube-api-access-dgzq4\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.368333 4765 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.368349 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.368359 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s25jn\" (UniqueName: \"kubernetes.io/projected/495fd7a5-eeae-4ebd-8606-ea08a366864e-kube-api-access-s25jn\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.368370 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495fd7a5-eeae-4ebd-8606-ea08a366864e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.368382 4765 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc70eb67-290e-462d-9c4a-b9b6adff35cb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.368392 4765 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.368401 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc70eb67-290e-462d-9c4a-b9b6adff35cb-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.368409 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.384256 4765 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.470666 4765 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 21 13:22:57 crc 
kubenswrapper[4765]: I0121 13:22:57.633743 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cc70eb67-290e-462d-9c4a-b9b6adff35cb","Type":"ContainerDied","Data":"6ae93f668b4162c9f3aca3bce7645eebfe3904a72a0dc85064778d6b94b03ec1"} Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.633828 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.645001 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.645435 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ae9c0ce-3c26-40ee-82b4-eb2ad1503562","Type":"ContainerDied","Data":"0e95bb3c05a315812fecdeaa5ef8d479f07505a984c18fc73031b7ce664de25b"} Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.650633 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g879q" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.650646 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g879q" event={"ID":"495fd7a5-eeae-4ebd-8606-ea08a366864e","Type":"ContainerDied","Data":"19ad8185cec84f9fb500fa1ba1c01c587ae984332198e8077a909241d3988109"} Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.650714 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19ad8185cec84f9fb500fa1ba1c01c587ae984332198e8077a909241d3988109" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.732301 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.757944 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.765532 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 13:22:57 crc kubenswrapper[4765]: E0121 13:22:57.766013 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="495fd7a5-eeae-4ebd-8606-ea08a366864e" containerName="keystone-bootstrap" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.766038 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="495fd7a5-eeae-4ebd-8606-ea08a366864e" containerName="keystone-bootstrap" Jan 21 13:22:57 crc kubenswrapper[4765]: E0121 13:22:57.766051 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae9c0ce-3c26-40ee-82b4-eb2ad1503562" containerName="glance-httpd" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.766060 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae9c0ce-3c26-40ee-82b4-eb2ad1503562" containerName="glance-httpd" Jan 21 13:22:57 crc kubenswrapper[4765]: E0121 13:22:57.766082 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae9c0ce-3c26-40ee-82b4-eb2ad1503562" containerName="glance-log" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.766090 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae9c0ce-3c26-40ee-82b4-eb2ad1503562" containerName="glance-log" Jan 21 13:22:57 crc kubenswrapper[4765]: E0121 13:22:57.766108 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc70eb67-290e-462d-9c4a-b9b6adff35cb" 
containerName="glance-httpd" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.766116 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc70eb67-290e-462d-9c4a-b9b6adff35cb" containerName="glance-httpd" Jan 21 13:22:57 crc kubenswrapper[4765]: E0121 13:22:57.766138 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc70eb67-290e-462d-9c4a-b9b6adff35cb" containerName="glance-log" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.766146 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc70eb67-290e-462d-9c4a-b9b6adff35cb" containerName="glance-log" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.766417 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae9c0ce-3c26-40ee-82b4-eb2ad1503562" containerName="glance-log" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.766443 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="495fd7a5-eeae-4ebd-8606-ea08a366864e" containerName="keystone-bootstrap" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.766462 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae9c0ce-3c26-40ee-82b4-eb2ad1503562" containerName="glance-httpd" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.766476 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc70eb67-290e-462d-9c4a-b9b6adff35cb" containerName="glance-log" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.766490 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc70eb67-290e-462d-9c4a-b9b6adff35cb" containerName="glance-httpd" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.767656 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.769391 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8hh29" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.772919 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.772920 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.772980 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.798885 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.880250 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.892366 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4233242d-f981-4e9c-b8d0-0ea546d328c3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.892445 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.892481 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.892559 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.892620 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4233242d-f981-4e9c-b8d0-0ea546d328c3-logs\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.892643 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.892687 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm66f\" (UniqueName: \"kubernetes.io/projected/4233242d-f981-4e9c-b8d0-0ea546d328c3-kube-api-access-sm66f\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.892732 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.900764 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.919963 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.921863 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.926235 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.927011 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.962286 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.997777 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.997873 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/677ee428-97c3-4ee7-a68b-8eb406f5734c-logs\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.998017 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.998044 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4233242d-f981-4e9c-b8d0-0ea546d328c3-logs\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.998086 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.998130 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/677ee428-97c3-4ee7-a68b-8eb406f5734c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.998176 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.998237 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm66f\" (UniqueName: 
\"kubernetes.io/projected/4233242d-f981-4e9c-b8d0-0ea546d328c3-kube-api-access-sm66f\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.998287 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:57 crc kubenswrapper[4765]: I0121 13:22:57.998696 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:57.998957 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4233242d-f981-4e9c-b8d0-0ea546d328c3-logs\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.000416 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.000543 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-config-data\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.000690 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-scripts\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.000774 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4233242d-f981-4e9c-b8d0-0ea546d328c3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.000908 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mvjm\" (UniqueName: \"kubernetes.io/projected/677ee428-97c3-4ee7-a68b-8eb406f5734c-kube-api-access-7mvjm\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.000952 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.001026 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.001690 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4233242d-f981-4e9c-b8d0-0ea546d328c3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.005740 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.006352 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.032578 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.038201 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.057655 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm66f\" (UniqueName: \"kubernetes.io/projected/4233242d-f981-4e9c-b8d0-0ea546d328c3-kube-api-access-sm66f\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.088838 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.103096 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/677ee428-97c3-4ee7-a68b-8eb406f5734c-logs\") pod \"glance-default-external-api-0\" (UID: 
\"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.104249 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.104421 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/677ee428-97c3-4ee7-a68b-8eb406f5734c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.104584 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.104702 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.104778 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/677ee428-97c3-4ee7-a68b-8eb406f5734c-logs\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.104900 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-config-data\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.105075 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-scripts\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.105375 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mvjm\" (UniqueName: \"kubernetes.io/projected/677ee428-97c3-4ee7-a68b-8eb406f5734c-kube-api-access-7mvjm\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.106339 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/677ee428-97c3-4ee7-a68b-8eb406f5734c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc 
kubenswrapper[4765]: I0121 13:22:58.107882 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.108795 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.113691 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.113856 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-scripts\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.134528 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.152156 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.162987 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-config-data\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.175539 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mvjm\" (UniqueName: \"kubernetes.io/projected/677ee428-97c3-4ee7-a68b-8eb406f5734c-kube-api-access-7mvjm\") pod \"glance-default-external-api-0\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.225468 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.288171 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-g879q"] Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.302370 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-g879q"] Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.370478 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vftqw"] Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.371962 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.376887 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.377334 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.377399 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.377643 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.377851 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w98g4" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.393775 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vftqw"] Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.416429 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-config-data\") pod \"keystone-bootstrap-vftqw\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.416491 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-combined-ca-bundle\") pod \"keystone-bootstrap-vftqw\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.416547 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-scripts\") pod \"keystone-bootstrap-vftqw\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.416626 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-fernet-keys\") pod \"keystone-bootstrap-vftqw\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.416646 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-credential-keys\") pod 
\"keystone-bootstrap-vftqw\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.416689 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzdvp\" (UniqueName: \"kubernetes.io/projected/5e10ec1e-60c7-497a-bd8f-710c01db5b28-kube-api-access-pzdvp\") pod \"keystone-bootstrap-vftqw\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.517569 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-config-data\") pod \"keystone-bootstrap-vftqw\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.517625 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-combined-ca-bundle\") pod \"keystone-bootstrap-vftqw\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.517665 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-scripts\") pod \"keystone-bootstrap-vftqw\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.517726 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-fernet-keys\") pod \"keystone-bootstrap-vftqw\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.517745 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-credential-keys\") pod \"keystone-bootstrap-vftqw\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.517785 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzdvp\" (UniqueName: \"kubernetes.io/projected/5e10ec1e-60c7-497a-bd8f-710c01db5b28-kube-api-access-pzdvp\") pod \"keystone-bootstrap-vftqw\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.521838 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-credential-keys\") pod \"keystone-bootstrap-vftqw\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.522606 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-combined-ca-bundle\") pod \"keystone-bootstrap-vftqw\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " pod="openstack/keystone-bootstrap-vftqw" 
Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.525098 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-config-data\") pod \"keystone-bootstrap-vftqw\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.525631 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-fernet-keys\") pod \"keystone-bootstrap-vftqw\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.526450 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-scripts\") pod \"keystone-bootstrap-vftqw\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.535575 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzdvp\" (UniqueName: \"kubernetes.io/projected/5e10ec1e-60c7-497a-bd8f-710c01db5b28-kube-api-access-pzdvp\") pod \"keystone-bootstrap-vftqw\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:22:58 crc kubenswrapper[4765]: I0121 13:22:58.690077 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:22:59 crc kubenswrapper[4765]: I0121 13:22:59.626300 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="495fd7a5-eeae-4ebd-8606-ea08a366864e" path="/var/lib/kubelet/pods/495fd7a5-eeae-4ebd-8606-ea08a366864e/volumes" Jan 21 13:22:59 crc kubenswrapper[4765]: I0121 13:22:59.627590 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ae9c0ce-3c26-40ee-82b4-eb2ad1503562" path="/var/lib/kubelet/pods/7ae9c0ce-3c26-40ee-82b4-eb2ad1503562/volumes" Jan 21 13:22:59 crc kubenswrapper[4765]: I0121 13:22:59.628564 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc70eb67-290e-462d-9c4a-b9b6adff35cb" path="/var/lib/kubelet/pods/cc70eb67-290e-462d-9c4a-b9b6adff35cb/volumes" Jan 21 13:23:00 crc kubenswrapper[4765]: E0121 13:23:00.112568 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 21 13:23:00 crc kubenswrapper[4765]: E0121 13:23:00.112753 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cch87h54bh665hb4h646hd8h698h89hd7h54fh5bfhd9hd9h685h5d8h59h58fh65fh664h585hfch5f4h579h6fh76h655h5dbh679h54chfch686q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z57t9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78bb670d-da93-47aa-af39-981e6a9bff0f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:23:01 crc kubenswrapper[4765]: E0121 13:23:01.832485 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 21 13:23:01 crc kubenswrapper[4765]: E0121 13:23:01.833017 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xtxn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-6m7js_openstack(92340e7a-b249-4701-8527-eacaf9ba1fd7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:23:01 crc kubenswrapper[4765]: E0121 13:23:01.834197 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-6m7js" podUID="92340e7a-b249-4701-8527-eacaf9ba1fd7" Jan 21 13:23:01 crc kubenswrapper[4765]: E0121 13:23:01.839650 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 21 13:23:01 crc kubenswrapper[4765]: E0121 13:23:01.839819 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n54fh574hc6h68dhcbh569h94h67dh558h547h658hcdh57fh658h5bfhbbh5cfhch587hbfhd9hfh67h76hbfhb4h5fh5b9h5b7h579hdch695q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvrpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-79fdbccc5f-584ld_openstack(1d1e05de-5888-4222-bf1f-1a27d64ff49c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:23:01 crc kubenswrapper[4765]: E0121 13:23:01.843277 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-79fdbccc5f-584ld" podUID="1d1e05de-5888-4222-bf1f-1a27d64ff49c" Jan 21 13:23:02 crc kubenswrapper[4765]: E0121 13:23:02.703945 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-6m7js" podUID="92340e7a-b249-4701-8527-eacaf9ba1fd7" Jan 21 13:23:05 crc kubenswrapper[4765]: I0121 13:23:05.725458 4765 generic.go:334] "Generic (PLEG): container finished" podID="2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5" containerID="55a7211486ad090d246dd116d0b0b13604208a9504841e230e9d04aabbf7b482" exitCode=0 Jan 21 13:23:05 crc kubenswrapper[4765]: I0121 13:23:05.725568 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-g87wz" event={"ID":"2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5","Type":"ContainerDied","Data":"55a7211486ad090d246dd116d0b0b13604208a9504841e230e9d04aabbf7b482"} Jan 21 13:23:06 crc kubenswrapper[4765]: I0121 13:23:06.492997 4765 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" podUID="121a128d-b52a-4cb6-a62c-34380823877c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Jan 21 13:23:11 crc kubenswrapper[4765]: I0121 13:23:11.494100 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" podUID="121a128d-b52a-4cb6-a62c-34380823877c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.318429 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-548d6bd7d9-2w72v" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.333486 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d68b4fcfc-tnw87" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.349306 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.351371 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79fdbccc5f-584ld" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.357190 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-g87wz" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.379677 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-logs\") pod \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.379781 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-horizon-secret-key\") pod \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.379824 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-scripts\") pod \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.379851 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-config-data\") pod \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.379901 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86fjf\" (UniqueName: \"kubernetes.io/projected/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-kube-api-access-86fjf\") pod \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\" (UID: \"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.380614 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-scripts" (OuterVolumeSpecName: "scripts") pod "e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca" (UID: "e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.380693 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-config-data" (OuterVolumeSpecName: "config-data") pod "e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca" (UID: "e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.381185 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-logs" (OuterVolumeSpecName: "logs") pod "e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca" (UID: "e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.384853 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca" (UID: "e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.390163 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-kube-api-access-86fjf" (OuterVolumeSpecName: "kube-api-access-86fjf") pod "e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca" (UID: "e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca"). InnerVolumeSpecName "kube-api-access-86fjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.481077 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvrpm\" (UniqueName: \"kubernetes.io/projected/1d1e05de-5888-4222-bf1f-1a27d64ff49c-kube-api-access-jvrpm\") pod \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.481138 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-config\") pod \"121a128d-b52a-4cb6-a62c-34380823877c\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.481183 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-dns-svc\") pod \"121a128d-b52a-4cb6-a62c-34380823877c\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.481254 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2k46\" (UniqueName: \"kubernetes.io/projected/2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5-kube-api-access-b2k46\") pod \"2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5\" (UID: \"2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.481277 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5-combined-ca-bundle\") pod \"2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5\" (UID: 
\"2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.481302 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/784cf761-2eea-4807-b2bd-94d7dcddecc2-scripts\") pod \"784cf761-2eea-4807-b2bd-94d7dcddecc2\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.481321 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whfwx\" (UniqueName: \"kubernetes.io/projected/121a128d-b52a-4cb6-a62c-34380823877c-kube-api-access-whfwx\") pod \"121a128d-b52a-4cb6-a62c-34380823877c\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.481341 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-ovsdbserver-sb\") pod \"121a128d-b52a-4cb6-a62c-34380823877c\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.481358 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5-config\") pod \"2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5\" (UID: \"2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.481380 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbsf7\" (UniqueName: \"kubernetes.io/projected/784cf761-2eea-4807-b2bd-94d7dcddecc2-kube-api-access-nbsf7\") pod \"784cf761-2eea-4807-b2bd-94d7dcddecc2\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.481398 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d1e05de-5888-4222-bf1f-1a27d64ff49c-config-data\") pod \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.481437 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-dns-swift-storage-0\") pod \"121a128d-b52a-4cb6-a62c-34380823877c\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.481953 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/784cf761-2eea-4807-b2bd-94d7dcddecc2-scripts" (OuterVolumeSpecName: "scripts") pod "784cf761-2eea-4807-b2bd-94d7dcddecc2" (UID: "784cf761-2eea-4807-b2bd-94d7dcddecc2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.482764 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d1e05de-5888-4222-bf1f-1a27d64ff49c-logs\") pod \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.482800 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d1e05de-5888-4222-bf1f-1a27d64ff49c-horizon-secret-key\") pod \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.482833 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/784cf761-2eea-4807-b2bd-94d7dcddecc2-horizon-secret-key\") pod \"784cf761-2eea-4807-b2bd-94d7dcddecc2\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.482865 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-ovsdbserver-nb\") pod \"121a128d-b52a-4cb6-a62c-34380823877c\" (UID: \"121a128d-b52a-4cb6-a62c-34380823877c\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.482902 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/784cf761-2eea-4807-b2bd-94d7dcddecc2-logs\") pod \"784cf761-2eea-4807-b2bd-94d7dcddecc2\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.482939 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/784cf761-2eea-4807-b2bd-94d7dcddecc2-config-data\") pod \"784cf761-2eea-4807-b2bd-94d7dcddecc2\" (UID: \"784cf761-2eea-4807-b2bd-94d7dcddecc2\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.482977 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d1e05de-5888-4222-bf1f-1a27d64ff49c-scripts\") pod \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\" (UID: \"1d1e05de-5888-4222-bf1f-1a27d64ff49c\") " Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.483496 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.483511 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.483520 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86fjf\" (UniqueName: \"kubernetes.io/projected/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-kube-api-access-86fjf\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.483531 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/784cf761-2eea-4807-b2bd-94d7dcddecc2-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 
crc kubenswrapper[4765]: I0121 13:23:14.483539 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.483547 4765 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.484406 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d1e05de-5888-4222-bf1f-1a27d64ff49c-scripts" (OuterVolumeSpecName: "scripts") pod "1d1e05de-5888-4222-bf1f-1a27d64ff49c" (UID: "1d1e05de-5888-4222-bf1f-1a27d64ff49c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.484790 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d1e05de-5888-4222-bf1f-1a27d64ff49c-kube-api-access-jvrpm" (OuterVolumeSpecName: "kube-api-access-jvrpm") pod "1d1e05de-5888-4222-bf1f-1a27d64ff49c" (UID: "1d1e05de-5888-4222-bf1f-1a27d64ff49c"). InnerVolumeSpecName "kube-api-access-jvrpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.485058 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d1e05de-5888-4222-bf1f-1a27d64ff49c-logs" (OuterVolumeSpecName: "logs") pod "1d1e05de-5888-4222-bf1f-1a27d64ff49c" (UID: "1d1e05de-5888-4222-bf1f-1a27d64ff49c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.488739 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/784cf761-2eea-4807-b2bd-94d7dcddecc2-logs" (OuterVolumeSpecName: "logs") pod "784cf761-2eea-4807-b2bd-94d7dcddecc2" (UID: "784cf761-2eea-4807-b2bd-94d7dcddecc2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.489264 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/784cf761-2eea-4807-b2bd-94d7dcddecc2-config-data" (OuterVolumeSpecName: "config-data") pod "784cf761-2eea-4807-b2bd-94d7dcddecc2" (UID: "784cf761-2eea-4807-b2bd-94d7dcddecc2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.490005 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/121a128d-b52a-4cb6-a62c-34380823877c-kube-api-access-whfwx" (OuterVolumeSpecName: "kube-api-access-whfwx") pod "121a128d-b52a-4cb6-a62c-34380823877c" (UID: "121a128d-b52a-4cb6-a62c-34380823877c"). InnerVolumeSpecName "kube-api-access-whfwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.493886 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d1e05de-5888-4222-bf1f-1a27d64ff49c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1d1e05de-5888-4222-bf1f-1a27d64ff49c" (UID: "1d1e05de-5888-4222-bf1f-1a27d64ff49c"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.494398 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5-kube-api-access-b2k46" (OuterVolumeSpecName: "kube-api-access-b2k46") pod "2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5" (UID: "2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5"). InnerVolumeSpecName "kube-api-access-b2k46". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.495577 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d1e05de-5888-4222-bf1f-1a27d64ff49c-config-data" (OuterVolumeSpecName: "config-data") pod "1d1e05de-5888-4222-bf1f-1a27d64ff49c" (UID: "1d1e05de-5888-4222-bf1f-1a27d64ff49c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.496941 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/784cf761-2eea-4807-b2bd-94d7dcddecc2-kube-api-access-nbsf7" (OuterVolumeSpecName: "kube-api-access-nbsf7") pod "784cf761-2eea-4807-b2bd-94d7dcddecc2" (UID: "784cf761-2eea-4807-b2bd-94d7dcddecc2"). InnerVolumeSpecName "kube-api-access-nbsf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.500465 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/784cf761-2eea-4807-b2bd-94d7dcddecc2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "784cf761-2eea-4807-b2bd-94d7dcddecc2" (UID: "784cf761-2eea-4807-b2bd-94d7dcddecc2"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.531295 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5" (UID: "2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.543589 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5-config" (OuterVolumeSpecName: "config") pod "2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5" (UID: "2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.545314 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "121a128d-b52a-4cb6-a62c-34380823877c" (UID: "121a128d-b52a-4cb6-a62c-34380823877c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.546413 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "121a128d-b52a-4cb6-a62c-34380823877c" (UID: "121a128d-b52a-4cb6-a62c-34380823877c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.548757 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "121a128d-b52a-4cb6-a62c-34380823877c" (UID: "121a128d-b52a-4cb6-a62c-34380823877c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.557413 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-config" (OuterVolumeSpecName: "config") pod "121a128d-b52a-4cb6-a62c-34380823877c" (UID: "121a128d-b52a-4cb6-a62c-34380823877c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.558854 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "121a128d-b52a-4cb6-a62c-34380823877c" (UID: "121a128d-b52a-4cb6-a62c-34380823877c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.585542 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.586804 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2k46\" (UniqueName: \"kubernetes.io/projected/2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5-kube-api-access-b2k46\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.586897 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.586966 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whfwx\" (UniqueName: \"kubernetes.io/projected/121a128d-b52a-4cb6-a62c-34380823877c-kube-api-access-whfwx\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.587022 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.587071 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.587144 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbsf7\" (UniqueName: \"kubernetes.io/projected/784cf761-2eea-4807-b2bd-94d7dcddecc2-kube-api-access-nbsf7\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.587217 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d1e05de-5888-4222-bf1f-1a27d64ff49c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc 
kubenswrapper[4765]: I0121 13:23:14.587271 4765 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.587342 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d1e05de-5888-4222-bf1f-1a27d64ff49c-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.587678 4765 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d1e05de-5888-4222-bf1f-1a27d64ff49c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.587738 4765 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/784cf761-2eea-4807-b2bd-94d7dcddecc2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.587800 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.587854 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/784cf761-2eea-4807-b2bd-94d7dcddecc2-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.587915 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/784cf761-2eea-4807-b2bd-94d7dcddecc2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.587968 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d1e05de-5888-4222-bf1f-1a27d64ff49c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.588020 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvrpm\" (UniqueName: \"kubernetes.io/projected/1d1e05de-5888-4222-bf1f-1a27d64ff49c-kube-api-access-jvrpm\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.588070 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/121a128d-b52a-4cb6-a62c-34380823877c-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.813786 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-548d6bd7d9-2w72v" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.814691 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-548d6bd7d9-2w72v" event={"ID":"e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca","Type":"ContainerDied","Data":"324ca630edf28f4579aee26fe3ca46390858f524bba01905f37824475cbf149c"} Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.821700 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-g87wz" event={"ID":"2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5","Type":"ContainerDied","Data":"fa3af29394628990df3578e8688fc8e46fae8fbfb260467fc9bd6078935cf0c6"} Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.821739 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa3af29394628990df3578e8688fc8e46fae8fbfb260467fc9bd6078935cf0c6" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.821762 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-g87wz" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.833740 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.833779 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" event={"ID":"121a128d-b52a-4cb6-a62c-34380823877c","Type":"ContainerDied","Data":"93cc0a8577571c0f8fd2e4fcc0b67cf6a0dde4bfaf2d5d89422bda22d77efd61"} Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.837135 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d68b4fcfc-tnw87" event={"ID":"784cf761-2eea-4807-b2bd-94d7dcddecc2","Type":"ContainerDied","Data":"eb6929c849be68783d0bf31343202d0b2634ec0418b75574f5ab28df86ac35c3"} Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.837200 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d68b4fcfc-tnw87" Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.864426 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79fdbccc5f-584ld" event={"ID":"1d1e05de-5888-4222-bf1f-1a27d64ff49c","Type":"ContainerDied","Data":"8ab365676adc16db79401d8e412a879dacdd0263d5c31c54520416d796497b30"} Jan 21 13:23:14 crc kubenswrapper[4765]: I0121 13:23:14.864709 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-79fdbccc5f-584ld" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.003291 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-548d6bd7d9-2w72v"] Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.011499 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-548d6bd7d9-2w72v"] Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.034258 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-w9btc"] Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.063275 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-w9btc"] Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.097578 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79fdbccc5f-584ld"] Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.117056 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-79fdbccc5f-584ld"] Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.133264 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d68b4fcfc-tnw87"] Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.138073 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7d68b4fcfc-tnw87"] Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.588676 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-x5sp2"] Jan 21 13:23:15 crc kubenswrapper[4765]: E0121 13:23:15.589083 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5" containerName="neutron-db-sync" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.589095 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5" containerName="neutron-db-sync" Jan 21 13:23:15 crc kubenswrapper[4765]: E0121 13:23:15.589124 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="121a128d-b52a-4cb6-a62c-34380823877c" containerName="dnsmasq-dns" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.589130 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="121a128d-b52a-4cb6-a62c-34380823877c" containerName="dnsmasq-dns" Jan 21 13:23:15 crc kubenswrapper[4765]: E0121 13:23:15.589143 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="121a128d-b52a-4cb6-a62c-34380823877c" containerName="init" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.589148 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="121a128d-b52a-4cb6-a62c-34380823877c" containerName="init" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.589336 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="121a128d-b52a-4cb6-a62c-34380823877c" containerName="dnsmasq-dns" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.589351 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5" containerName="neutron-db-sync" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.590188 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.594144 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-x5sp2"] Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.657846 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="121a128d-b52a-4cb6-a62c-34380823877c" path="/var/lib/kubelet/pods/121a128d-b52a-4cb6-a62c-34380823877c/volumes" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.659193 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d1e05de-5888-4222-bf1f-1a27d64ff49c" path="/var/lib/kubelet/pods/1d1e05de-5888-4222-bf1f-1a27d64ff49c/volumes" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.660031 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="784cf761-2eea-4807-b2bd-94d7dcddecc2" path="/var/lib/kubelet/pods/784cf761-2eea-4807-b2bd-94d7dcddecc2/volumes" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.660533 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca" path="/var/lib/kubelet/pods/e0ff6c3e-fac2-4dbb-9ce8-a31b2019b1ca/volumes" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.718496 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-x5sp2\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.718555 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5gnt\" (UniqueName: \"kubernetes.io/projected/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-kube-api-access-q5gnt\") pod \"dnsmasq-dns-6b7b667979-x5sp2\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.718775 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-dns-svc\") pod \"dnsmasq-dns-6b7b667979-x5sp2\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.719377 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-x5sp2\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.719469 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-config\") pod \"dnsmasq-dns-6b7b667979-x5sp2\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.719656 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6b7b667979-x5sp2\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.742031 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-858874fc56-6kgbs"] Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.743592 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.747690 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.747916 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.748011 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-b4t8h" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.748039 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.753329 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-858874fc56-6kgbs"] Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.821040 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-ovndb-tls-certs\") pod \"neutron-858874fc56-6kgbs\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.821443 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-x5sp2\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.821476 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5gnt\" (UniqueName: \"kubernetes.io/projected/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-kube-api-access-q5gnt\") pod \"dnsmasq-dns-6b7b667979-x5sp2\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.821493 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frtt6\" (UniqueName: \"kubernetes.io/projected/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-kube-api-access-frtt6\") pod \"neutron-858874fc56-6kgbs\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.821530 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-dns-svc\") pod \"dnsmasq-dns-6b7b667979-x5sp2\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.821552 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-ovsdbserver-nb\") pod 
\"dnsmasq-dns-6b7b667979-x5sp2\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.822432 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-x5sp2\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.822486 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-config\") pod \"dnsmasq-dns-6b7b667979-x5sp2\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.823002 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-dns-svc\") pod \"dnsmasq-dns-6b7b667979-x5sp2\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.823726 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-x5sp2\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.824280 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-config\") pod \"dnsmasq-dns-6b7b667979-x5sp2\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.824372 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-httpd-config\") pod \"neutron-858874fc56-6kgbs\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.824428 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-combined-ca-bundle\") pod \"neutron-858874fc56-6kgbs\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.824475 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-config\") pod \"neutron-858874fc56-6kgbs\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.824502 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-x5sp2\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " 
pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.825102 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-x5sp2\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.859905 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5gnt\" (UniqueName: \"kubernetes.io/projected/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-kube-api-access-q5gnt\") pod \"dnsmasq-dns-6b7b667979-x5sp2\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.926481 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-httpd-config\") pod \"neutron-858874fc56-6kgbs\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.926557 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-combined-ca-bundle\") pod \"neutron-858874fc56-6kgbs\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.926609 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-config\") pod \"neutron-858874fc56-6kgbs\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.926663 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-ovndb-tls-certs\") pod \"neutron-858874fc56-6kgbs\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.926720 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frtt6\" (UniqueName: \"kubernetes.io/projected/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-kube-api-access-frtt6\") pod \"neutron-858874fc56-6kgbs\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.936047 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-httpd-config\") pod \"neutron-858874fc56-6kgbs\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.936700 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-combined-ca-bundle\") pod \"neutron-858874fc56-6kgbs\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.937145 4765 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-config\") pod \"neutron-858874fc56-6kgbs\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.953276 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frtt6\" (UniqueName: \"kubernetes.io/projected/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-kube-api-access-frtt6\") pod \"neutron-858874fc56-6kgbs\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.957704 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:15 crc kubenswrapper[4765]: I0121 13:23:15.961095 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-ovndb-tls-certs\") pod \"neutron-858874fc56-6kgbs\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:16 crc kubenswrapper[4765]: I0121 13:23:16.069228 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:16 crc kubenswrapper[4765]: I0121 13:23:16.494885 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-w9btc" podUID="121a128d-b52a-4cb6-a62c-34380823877c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Jan 21 13:23:16 crc kubenswrapper[4765]: E0121 13:23:16.855283 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 21 13:23:16 crc kubenswrapper[4765]: E0121 13:23:16.856648 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4kwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-v4h97_openstack(3f0ee201-f570-4414-9feb-616192dfca3b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:23:16 crc kubenswrapper[4765]: E0121 13:23:16.857850 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-v4h97" podUID="3f0ee201-f570-4414-9feb-616192dfca3b" Jan 21 13:23:16 crc kubenswrapper[4765]: I0121 13:23:16.878705 4765 scope.go:117] "RemoveContainer" containerID="d87a2d5bee7860b1928d43ff257ff0b5eda0b79f5d9a0215393d76e18f608dc8" Jan 21 13:23:16 crc kubenswrapper[4765]: E0121 13:23:16.899929 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-v4h97" podUID="3f0ee201-f570-4414-9feb-616192dfca3b" Jan 21 13:23:17 crc kubenswrapper[4765]: I0121 13:23:17.404807 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6558674dbd-lct5s"] Jan 21 13:23:17 crc kubenswrapper[4765]: E0121 13:23:17.458331 4765 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified" Jan 21 13:23:17 crc kubenswrapper[4765]: E0121 13:23:17.458535 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-notification-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cch87h54bh665hb4h646hd8h698h89hd7h54fh5bfhd9hd9h685h5d8h59h58fh65fh664h585hfch5f4h579h6fh76h655h5dbh679h54chfch686q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-notification-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z57t9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/notificationhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78bb670d-da93-47aa-af39-981e6a9bff0f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:23:17 crc kubenswrapper[4765]: I0121 13:23:17.617312 4765 scope.go:117] "RemoveContainer" containerID="7d02961743e2b97069451ae25fdf08a23ac40542b27d1496cec26a583b46439c" Jan 21 13:23:17 crc kubenswrapper[4765]: I0121 13:23:17.696358 4765 scope.go:117] "RemoveContainer" containerID="ff358a461fc9d904f53f1fd5b5c47e779a60fe4e0ea88829610b92b8783a135f" Jan 21 13:23:17 crc kubenswrapper[4765]: I0121 13:23:17.869165 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77dcd8ffdf-64j8s"] Jan 21 13:23:17 crc kubenswrapper[4765]: I0121 13:23:17.870766 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:17 crc kubenswrapper[4765]: I0121 13:23:17.880890 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 21 13:23:17 crc kubenswrapper[4765]: I0121 13:23:17.887876 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77dcd8ffdf-64j8s"] Jan 21 13:23:17 crc kubenswrapper[4765]: I0121 13:23:17.896062 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 21 13:23:17 crc kubenswrapper[4765]: I0121 13:23:17.914370 4765 scope.go:117] "RemoveContainer" containerID="3ec5ced41fffab823682f1a854c5134ba3041bad17168cce3fa2065df5d319b9" Jan 21 13:23:17 crc kubenswrapper[4765]: I0121 13:23:17.928167 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d069b575-51e3-4f93-bff8-a1f0cb141797-combined-ca-bundle\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:17 crc kubenswrapper[4765]: I0121 13:23:17.928323 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d069b575-51e3-4f93-bff8-a1f0cb141797-httpd-config\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:17 crc kubenswrapper[4765]: I0121 13:23:17.928372 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d069b575-51e3-4f93-bff8-a1f0cb141797-ovndb-tls-certs\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:17 crc kubenswrapper[4765]: I0121 13:23:17.928403 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d069b575-51e3-4f93-bff8-a1f0cb141797-public-tls-certs\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:17 crc kubenswrapper[4765]: I0121 13:23:17.928440 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wpwp\" (UniqueName: \"kubernetes.io/projected/d069b575-51e3-4f93-bff8-a1f0cb141797-kube-api-access-4wpwp\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:17 crc kubenswrapper[4765]: I0121 13:23:17.928466 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d069b575-51e3-4f93-bff8-a1f0cb141797-config\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:17 crc kubenswrapper[4765]: I0121 13:23:17.928502 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d069b575-51e3-4f93-bff8-a1f0cb141797-internal-tls-certs\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 
13:23:18 crc kubenswrapper[4765]: I0121 13:23:18.006752 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6558674dbd-lct5s" event={"ID":"074ae613-bc7f-4443-abdb-7010b6054997","Type":"ContainerStarted","Data":"c94eb348928801014ccf9c915bc637093a81ba6b2b4e7703298b64c3fa0a3b4c"} Jan 21 13:23:18 crc kubenswrapper[4765]: I0121 13:23:18.031685 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d069b575-51e3-4f93-bff8-a1f0cb141797-combined-ca-bundle\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:18 crc kubenswrapper[4765]: I0121 13:23:18.031769 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d069b575-51e3-4f93-bff8-a1f0cb141797-httpd-config\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:18 crc kubenswrapper[4765]: I0121 13:23:18.031810 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d069b575-51e3-4f93-bff8-a1f0cb141797-ovndb-tls-certs\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:18 crc kubenswrapper[4765]: I0121 13:23:18.031861 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d069b575-51e3-4f93-bff8-a1f0cb141797-public-tls-certs\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:18 crc kubenswrapper[4765]: I0121 13:23:18.031897 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wpwp\" (UniqueName: \"kubernetes.io/projected/d069b575-51e3-4f93-bff8-a1f0cb141797-kube-api-access-4wpwp\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:18 crc kubenswrapper[4765]: I0121 13:23:18.031916 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d069b575-51e3-4f93-bff8-a1f0cb141797-config\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:18 crc kubenswrapper[4765]: I0121 13:23:18.031954 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d069b575-51e3-4f93-bff8-a1f0cb141797-internal-tls-certs\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:18 crc kubenswrapper[4765]: I0121 13:23:18.084330 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d069b575-51e3-4f93-bff8-a1f0cb141797-ovndb-tls-certs\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:18 crc kubenswrapper[4765]: I0121 13:23:18.104332 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/d069b575-51e3-4f93-bff8-a1f0cb141797-httpd-config\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:18 crc kubenswrapper[4765]: I0121 13:23:18.104631 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d069b575-51e3-4f93-bff8-a1f0cb141797-config\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:18 crc kubenswrapper[4765]: I0121 13:23:18.107378 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d069b575-51e3-4f93-bff8-a1f0cb141797-internal-tls-certs\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:18 crc kubenswrapper[4765]: I0121 13:23:18.112836 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wpwp\" (UniqueName: \"kubernetes.io/projected/d069b575-51e3-4f93-bff8-a1f0cb141797-kube-api-access-4wpwp\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:18 crc kubenswrapper[4765]: I0121 13:23:18.121725 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d069b575-51e3-4f93-bff8-a1f0cb141797-combined-ca-bundle\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:18 crc kubenswrapper[4765]: I0121 13:23:18.122300 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d069b575-51e3-4f93-bff8-a1f0cb141797-public-tls-certs\") pod \"neutron-77dcd8ffdf-64j8s\" (UID: \"d069b575-51e3-4f93-bff8-a1f0cb141797\") " pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:18 crc kubenswrapper[4765]: I0121 13:23:18.217910 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:19 crc kubenswrapper[4765]: I0121 13:23:19.294433 4765 scope.go:117] "RemoveContainer" containerID="cc03b92e62ccb8d90084d7be2f638ce4c082aba4a211f71aec3bbfa9509605c1" Jan 21 13:23:19 crc kubenswrapper[4765]: I0121 13:23:19.450754 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86c57777f6-gqpgv"] Jan 21 13:23:19 crc kubenswrapper[4765]: I0121 13:23:19.514393 4765 scope.go:117] "RemoveContainer" containerID="a2159a93e0ff58a5d38b86b179e0f07cf7b50f2e5912dcb6a46bc3cd021448e1" Jan 21 13:23:19 crc kubenswrapper[4765]: I0121 13:23:19.524253 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 13:23:19 crc kubenswrapper[4765]: I0121 13:23:19.524513 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-x5sp2"] Jan 21 13:23:19 crc kubenswrapper[4765]: I0121 13:23:19.744521 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vftqw"] Jan 21 13:23:19 crc kubenswrapper[4765]: I0121 13:23:19.747772 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 13:23:20 crc kubenswrapper[4765]: I0121 13:23:20.194664 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4233242d-f981-4e9c-b8d0-0ea546d328c3","Type":"ContainerStarted","Data":"f37db62c2d962a968f605c6393f46c6dc0d081f3e98d0da4549f74950b192a97"} Jan 21 13:23:20 crc kubenswrapper[4765]: I0121 13:23:20.199880 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-64bhp" event={"ID":"e7141df0-548e-4699-8620-4d85ba1b1218","Type":"ContainerStarted","Data":"47a49ca207f9a4bf97ff3c48d8898c3431b17fb94402ff80fbbb1c0681a6404a"} Jan 21 13:23:20 crc kubenswrapper[4765]: I0121 13:23:20.208799 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"6c509a513e1ebf6d2d06160d429b88c481004be78e418699ef3864eb908e3f4c"} Jan 21 13:23:20 crc kubenswrapper[4765]: I0121 13:23:20.212657 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" event={"ID":"87f9269a-2c20-4132-8f2c-c5e8c7493fc9","Type":"ContainerStarted","Data":"4172b9d63e26ba467db14305e76bcd3e8f8c6ddbf765821735dd60a1c1b06bc3"} Jan 21 13:23:20 crc kubenswrapper[4765]: I0121 13:23:20.222435 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-64bhp" podStartSLOduration=8.854506255 podStartE2EDuration="47.222411578s" podCreationTimestamp="2026-01-21 13:22:33 +0000 UTC" firstStartedPulling="2026-01-21 13:22:35.848706434 +0000 UTC m=+1216.866432256" lastFinishedPulling="2026-01-21 13:23:14.216611757 +0000 UTC m=+1255.234337579" observedRunningTime="2026-01-21 13:23:20.219399771 +0000 UTC m=+1261.237125593" watchObservedRunningTime="2026-01-21 13:23:20.222411578 +0000 UTC m=+1261.240137400" Jan 21 13:23:20 crc kubenswrapper[4765]: I0121 13:23:20.229328 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6m7js" event={"ID":"92340e7a-b249-4701-8527-eacaf9ba1fd7","Type":"ContainerStarted","Data":"236b62ff57085023bfd7faa978709fe7c1cf5b565a052ca93f0b06f9405fda16"} Jan 21 13:23:20 crc kubenswrapper[4765]: I0121 13:23:20.239916 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-86c57777f6-gqpgv" event={"ID":"1241b1f0-34c1-401a-b91f-13b72926cc2c","Type":"ContainerStarted","Data":"ef8720b3aed4a4636741edbebe5b34935d53962337bb9a9ed620be0a777468bc"} Jan 21 13:23:20 crc kubenswrapper[4765]: I0121 13:23:20.272468 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vftqw" event={"ID":"5e10ec1e-60c7-497a-bd8f-710c01db5b28","Type":"ContainerStarted","Data":"905ed9ad428eb9f94f056ae45d1bd5b5ac098154fdac9931ef5b24adc55b1669"} Jan 21 13:23:20 crc kubenswrapper[4765]: I0121 13:23:20.276617 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-6m7js" podStartSLOduration=5.775582549 podStartE2EDuration="47.276593689s" podCreationTimestamp="2026-01-21 13:22:33 +0000 UTC" firstStartedPulling="2026-01-21 13:22:36.12947451 +0000 UTC m=+1217.147200332" lastFinishedPulling="2026-01-21 13:23:17.63048565 +0000 UTC m=+1258.648211472" observedRunningTime="2026-01-21 13:23:20.26876097 +0000 UTC m=+1261.286486792" watchObservedRunningTime="2026-01-21 13:23:20.276593689 +0000 UTC m=+1261.294319511" Jan 21 13:23:20 crc kubenswrapper[4765]: I0121 13:23:20.344942 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77dcd8ffdf-64j8s"] Jan 21 13:23:20 crc kubenswrapper[4765]: I0121 13:23:20.484732 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 13:23:20 crc kubenswrapper[4765]: I0121 13:23:20.576523 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-858874fc56-6kgbs"] Jan 21 13:23:21 crc kubenswrapper[4765]: I0121 13:23:21.292656 4765 generic.go:334] "Generic (PLEG): container finished" podID="87f9269a-2c20-4132-8f2c-c5e8c7493fc9" containerID="cca2d583fed06e06f35eb259ace9a823748a14ecf0a7e29057f5b338721b0ad4" exitCode=0 Jan 21 13:23:21 crc kubenswrapper[4765]: I0121 13:23:21.293254 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" event={"ID":"87f9269a-2c20-4132-8f2c-c5e8c7493fc9","Type":"ContainerDied","Data":"cca2d583fed06e06f35eb259ace9a823748a14ecf0a7e29057f5b338721b0ad4"} Jan 21 13:23:21 crc kubenswrapper[4765]: I0121 13:23:21.302103 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4233242d-f981-4e9c-b8d0-0ea546d328c3","Type":"ContainerStarted","Data":"575d61fed73b271b7bc2060c0acb686a2fc3e396b6c9ea37e6ed4335c84091a0"} Jan 21 13:23:21 crc kubenswrapper[4765]: I0121 13:23:21.313511 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"677ee428-97c3-4ee7-a68b-8eb406f5734c","Type":"ContainerStarted","Data":"fe17f4df94f2f887a7d85e2418b552b10689688148162c3ddc05ad0c2cb84da3"} Jan 21 13:23:21 crc kubenswrapper[4765]: I0121 13:23:21.333243 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77dcd8ffdf-64j8s" event={"ID":"d069b575-51e3-4f93-bff8-a1f0cb141797","Type":"ContainerStarted","Data":"ee080b5e2d91fd865a690de5651465b09f87d949f88125ad213bfdf1aea50de4"} Jan 21 13:23:21 crc kubenswrapper[4765]: I0121 13:23:21.333286 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77dcd8ffdf-64j8s" event={"ID":"d069b575-51e3-4f93-bff8-a1f0cb141797","Type":"ContainerStarted","Data":"8763ab9ad4ff135b79de4f62b4008a376fcb11cd0b759650b481d0decfbaf3df"} Jan 21 13:23:21 crc kubenswrapper[4765]: I0121 13:23:21.355543 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-vftqw" event={"ID":"5e10ec1e-60c7-497a-bd8f-710c01db5b28","Type":"ContainerStarted","Data":"2754f20aa9da36d9d9ca96314b11447ddef71b04750531d794dad3815e7e58e3"} Jan 21 13:23:21 crc kubenswrapper[4765]: I0121 13:23:21.365265 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-858874fc56-6kgbs" event={"ID":"06ec9aac-7fc7-4070-bfc1-a23f1a27060a","Type":"ContainerStarted","Data":"c2cc1c8c9784782125e37e34ddd38da385ce68363964beb236e4a274e1b53cc1"} Jan 21 13:23:21 crc kubenswrapper[4765]: I0121 13:23:21.365310 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-858874fc56-6kgbs" event={"ID":"06ec9aac-7fc7-4070-bfc1-a23f1a27060a","Type":"ContainerStarted","Data":"9e524b8e8c1d8438b0d306db66762339234756ca942cb5dc50a8a480a7216cf3"} Jan 21 13:23:21 crc kubenswrapper[4765]: I0121 13:23:21.378049 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6558674dbd-lct5s" event={"ID":"074ae613-bc7f-4443-abdb-7010b6054997","Type":"ContainerStarted","Data":"80bf8f8075aaafb1737281da7be1eba64cc3312c18d9db5a1ce9e20ad270bd85"} Jan 21 13:23:21 crc kubenswrapper[4765]: I0121 13:23:21.417080 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vftqw" podStartSLOduration=23.417061793 podStartE2EDuration="23.417061793s" podCreationTimestamp="2026-01-21 13:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:23:21.375864652 +0000 UTC m=+1262.393590474" watchObservedRunningTime="2026-01-21 13:23:21.417061793 +0000 UTC m=+1262.434787615" Jan 21 13:23:22 crc kubenswrapper[4765]: I0121 13:23:22.400726 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86c57777f6-gqpgv" event={"ID":"1241b1f0-34c1-401a-b91f-13b72926cc2c","Type":"ContainerStarted","Data":"2bbb372ad03a36f2bfe943352f8bbd5b5c9e4e5cebf1367bcd139391ad4a406f"} Jan 21 13:23:22 crc kubenswrapper[4765]: I0121 13:23:22.403780 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77dcd8ffdf-64j8s" event={"ID":"d069b575-51e3-4f93-bff8-a1f0cb141797","Type":"ContainerStarted","Data":"209e6a7a55f02ed9dc00388238decff097b8ee7aeca983f555abcc412aa9ecc4"} Jan 21 13:23:22 crc kubenswrapper[4765]: I0121 13:23:22.405280 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:22 crc kubenswrapper[4765]: I0121 13:23:22.410874 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6558674dbd-lct5s" event={"ID":"074ae613-bc7f-4443-abdb-7010b6054997","Type":"ContainerStarted","Data":"aa436e74a6fd1c1c3a4ed7348015c8f931d8a51210c3f7b94c4c01885524ce52"} Jan 21 13:23:22 crc kubenswrapper[4765]: I0121 13:23:22.457373 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-77dcd8ffdf-64j8s" podStartSLOduration=5.457354775 podStartE2EDuration="5.457354775s" podCreationTimestamp="2026-01-21 13:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:23:22.454618205 +0000 UTC m=+1263.472344027" watchObservedRunningTime="2026-01-21 13:23:22.457354775 +0000 UTC m=+1263.475080597" Jan 21 13:23:22 crc kubenswrapper[4765]: I0121 13:23:22.498674 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6558674dbd-lct5s" 
podStartSLOduration=38.467071944 podStartE2EDuration="40.498656819s" podCreationTimestamp="2026-01-21 13:22:42 +0000 UTC" firstStartedPulling="2026-01-21 13:23:17.541225276 +0000 UTC m=+1258.558951098" lastFinishedPulling="2026-01-21 13:23:19.572810151 +0000 UTC m=+1260.590535973" observedRunningTime="2026-01-21 13:23:22.497696551 +0000 UTC m=+1263.515422363" watchObservedRunningTime="2026-01-21 13:23:22.498656819 +0000 UTC m=+1263.516382641" Jan 21 13:23:23 crc kubenswrapper[4765]: I0121 13:23:23.280533 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6558674dbd-lct5s" Jan 21 13:23:23 crc kubenswrapper[4765]: I0121 13:23:23.281617 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6558674dbd-lct5s" Jan 21 13:23:23 crc kubenswrapper[4765]: I0121 13:23:23.446946 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86c57777f6-gqpgv" event={"ID":"1241b1f0-34c1-401a-b91f-13b72926cc2c","Type":"ContainerStarted","Data":"46f1a7c9396eca5402ea7a2319db77d5ead07a4127c2f33dffbb8adc136e01da"} Jan 21 13:23:23 crc kubenswrapper[4765]: I0121 13:23:23.452359 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-858874fc56-6kgbs" event={"ID":"06ec9aac-7fc7-4070-bfc1-a23f1a27060a","Type":"ContainerStarted","Data":"f147b57d7bb3d9a984b44d1d501cab848a2b423001fc765a7195550a05e30cf9"} Jan 21 13:23:23 crc kubenswrapper[4765]: I0121 13:23:23.452643 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:23 crc kubenswrapper[4765]: I0121 13:23:23.467792 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" event={"ID":"87f9269a-2c20-4132-8f2c-c5e8c7493fc9","Type":"ContainerStarted","Data":"f8a73ccbe593ba79f289915182b7e1741421333934829e5bb29ff7ccc180175a"} Jan 21 13:23:23 crc kubenswrapper[4765]: I0121 13:23:23.468263 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:23 crc kubenswrapper[4765]: I0121 13:23:23.473771 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4233242d-f981-4e9c-b8d0-0ea546d328c3","Type":"ContainerStarted","Data":"1a3a1b538e1d9a2858b08c392c47ea3cd1b8949624df5522dffd4885d438a96e"} Jan 21 13:23:23 crc kubenswrapper[4765]: I0121 13:23:23.483654 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-86c57777f6-gqpgv" podStartSLOduration=38.932733102 podStartE2EDuration="40.483626938s" podCreationTimestamp="2026-01-21 13:22:43 +0000 UTC" firstStartedPulling="2026-01-21 13:23:19.440156692 +0000 UTC m=+1260.457882524" lastFinishedPulling="2026-01-21 13:23:20.991050528 +0000 UTC m=+1262.008776360" observedRunningTime="2026-01-21 13:23:23.479620302 +0000 UTC m=+1264.497346124" watchObservedRunningTime="2026-01-21 13:23:23.483626938 +0000 UTC m=+1264.501352760" Jan 21 13:23:23 crc kubenswrapper[4765]: I0121 13:23:23.486722 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"677ee428-97c3-4ee7-a68b-8eb406f5734c","Type":"ContainerStarted","Data":"16422fa6c406388ec2153cf2b8e8959c82de1c40e29b88eca0a98c498f8c8d3c"} Jan 21 13:23:23 crc kubenswrapper[4765]: I0121 13:23:23.492748 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"677ee428-97c3-4ee7-a68b-8eb406f5734c","Type":"ContainerStarted","Data":"b5f9ee99286880921762bb7f873930727cc8a13b80ef748b8ec83ef3e479a8b9"} Jan 21 13:23:23 crc kubenswrapper[4765]: I0121 13:23:23.512181 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" podStartSLOduration=8.512161821 podStartE2EDuration="8.512161821s" podCreationTimestamp="2026-01-21 13:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:23:23.502517739 +0000 UTC m=+1264.520243561" watchObservedRunningTime="2026-01-21 13:23:23.512161821 +0000 UTC m=+1264.529887643" Jan 21 13:23:23 crc kubenswrapper[4765]: I0121 13:23:23.527968 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=26.527946671 podStartE2EDuration="26.527946671s" podCreationTimestamp="2026-01-21 13:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:23:23.525966373 +0000 UTC m=+1264.543692205" watchObservedRunningTime="2026-01-21 13:23:23.527946671 +0000 UTC m=+1264.545672513" Jan 21 13:23:23 crc kubenswrapper[4765]: I0121 13:23:23.586239 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-858874fc56-6kgbs" podStartSLOduration=8.58618595 podStartE2EDuration="8.58618595s" podCreationTimestamp="2026-01-21 13:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:23:23.585359996 +0000 UTC m=+1264.603085828" watchObservedRunningTime="2026-01-21 13:23:23.58618595 +0000 UTC m=+1264.603911782" Jan 21 13:23:24 crc kubenswrapper[4765]: I0121 13:23:24.560663 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=27.560647482 podStartE2EDuration="27.560647482s" podCreationTimestamp="2026-01-21 13:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:23:24.557663465 +0000 UTC m=+1265.575389287" watchObservedRunningTime="2026-01-21 13:23:24.560647482 +0000 UTC m=+1265.578373294" Jan 21 13:23:27 crc kubenswrapper[4765]: I0121 13:23:27.580266 4765 generic.go:334] "Generic (PLEG): container finished" podID="92340e7a-b249-4701-8527-eacaf9ba1fd7" containerID="236b62ff57085023bfd7faa978709fe7c1cf5b565a052ca93f0b06f9405fda16" exitCode=0 Jan 21 13:23:27 crc kubenswrapper[4765]: I0121 13:23:27.580804 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6m7js" event={"ID":"92340e7a-b249-4701-8527-eacaf9ba1fd7","Type":"ContainerDied","Data":"236b62ff57085023bfd7faa978709fe7c1cf5b565a052ca93f0b06f9405fda16"} Jan 21 13:23:28 crc kubenswrapper[4765]: I0121 13:23:28.108965 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 13:23:28 crc kubenswrapper[4765]: I0121 13:23:28.109324 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 13:23:28 crc kubenswrapper[4765]: I0121 13:23:28.109355 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 13:23:28 crc 
kubenswrapper[4765]: I0121 13:23:28.109371 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 13:23:28 crc kubenswrapper[4765]: I0121 13:23:28.152449 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 13:23:28 crc kubenswrapper[4765]: I0121 13:23:28.164614 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 13:23:28 crc kubenswrapper[4765]: I0121 13:23:28.226279 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 13:23:28 crc kubenswrapper[4765]: I0121 13:23:28.226366 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 13:23:28 crc kubenswrapper[4765]: I0121 13:23:28.226387 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 13:23:28 crc kubenswrapper[4765]: I0121 13:23:28.226407 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 13:23:28 crc kubenswrapper[4765]: I0121 13:23:28.259745 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 13:23:28 crc kubenswrapper[4765]: I0121 13:23:28.284360 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 13:23:28 crc kubenswrapper[4765]: I0121 13:23:28.592755 4765 generic.go:334] "Generic (PLEG): container finished" podID="e7141df0-548e-4699-8620-4d85ba1b1218" containerID="47a49ca207f9a4bf97ff3c48d8898c3431b17fb94402ff80fbbb1c0681a6404a" exitCode=0 Jan 21 13:23:28 crc kubenswrapper[4765]: I0121 13:23:28.599458 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-64bhp" event={"ID":"e7141df0-548e-4699-8620-4d85ba1b1218","Type":"ContainerDied","Data":"47a49ca207f9a4bf97ff3c48d8898c3431b17fb94402ff80fbbb1c0681a6404a"} Jan 21 13:23:29 crc kubenswrapper[4765]: I0121 13:23:29.607728 4765 generic.go:334] "Generic (PLEG): container finished" podID="5e10ec1e-60c7-497a-bd8f-710c01db5b28" containerID="2754f20aa9da36d9d9ca96314b11447ddef71b04750531d794dad3815e7e58e3" exitCode=0 Jan 21 13:23:29 crc kubenswrapper[4765]: I0121 13:23:29.608835 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vftqw" event={"ID":"5e10ec1e-60c7-497a-bd8f-710c01db5b28","Type":"ContainerDied","Data":"2754f20aa9da36d9d9ca96314b11447ddef71b04750531d794dad3815e7e58e3"} Jan 21 13:23:30 crc kubenswrapper[4765]: I0121 13:23:30.960285 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:31 crc kubenswrapper[4765]: I0121 13:23:31.064770 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-cnm96"] Jan 21 13:23:31 crc kubenswrapper[4765]: I0121 13:23:31.065003 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" podUID="5d57e410-03d3-422a-ba44-f5a2ed1e1417" containerName="dnsmasq-dns" containerID="cri-o://571e112947afaefee85d3cb4e2b6d6d0f03f65661f6d1e8d32a2f104c267813d" gracePeriod=10 Jan 21 13:23:31 crc kubenswrapper[4765]: I0121 13:23:31.644736 4765 generic.go:334] "Generic (PLEG): 
container finished" podID="5d57e410-03d3-422a-ba44-f5a2ed1e1417" containerID="571e112947afaefee85d3cb4e2b6d6d0f03f65661f6d1e8d32a2f104c267813d" exitCode=0 Jan 21 13:23:31 crc kubenswrapper[4765]: I0121 13:23:31.644785 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" event={"ID":"5d57e410-03d3-422a-ba44-f5a2ed1e1417","Type":"ContainerDied","Data":"571e112947afaefee85d3cb4e2b6d6d0f03f65661f6d1e8d32a2f104c267813d"} Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.528148 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.532069 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-64bhp" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.558252 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6m7js" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.663705 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-combined-ca-bundle\") pod \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.663748 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7141df0-548e-4699-8620-4d85ba1b1218-combined-ca-bundle\") pod \"e7141df0-548e-4699-8620-4d85ba1b1218\" (UID: \"e7141df0-548e-4699-8620-4d85ba1b1218\") " Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.663778 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92340e7a-b249-4701-8527-eacaf9ba1fd7-logs\") pod \"92340e7a-b249-4701-8527-eacaf9ba1fd7\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.663797 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtxn6\" (UniqueName: \"kubernetes.io/projected/92340e7a-b249-4701-8527-eacaf9ba1fd7-kube-api-access-xtxn6\") pod \"92340e7a-b249-4701-8527-eacaf9ba1fd7\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.663844 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-scripts\") pod \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.663901 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-config-data\") pod \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.663963 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e7141df0-548e-4699-8620-4d85ba1b1218-db-sync-config-data\") pod \"e7141df0-548e-4699-8620-4d85ba1b1218\" (UID: \"e7141df0-548e-4699-8620-4d85ba1b1218\") " Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.664032 4765 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92340e7a-b249-4701-8527-eacaf9ba1fd7-scripts\") pod \"92340e7a-b249-4701-8527-eacaf9ba1fd7\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.664050 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-credential-keys\") pod \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.664094 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92340e7a-b249-4701-8527-eacaf9ba1fd7-combined-ca-bundle\") pod \"92340e7a-b249-4701-8527-eacaf9ba1fd7\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.664137 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p24bt\" (UniqueName: \"kubernetes.io/projected/e7141df0-548e-4699-8620-4d85ba1b1218-kube-api-access-p24bt\") pod \"e7141df0-548e-4699-8620-4d85ba1b1218\" (UID: \"e7141df0-548e-4699-8620-4d85ba1b1218\") " Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.664169 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92340e7a-b249-4701-8527-eacaf9ba1fd7-config-data\") pod \"92340e7a-b249-4701-8527-eacaf9ba1fd7\" (UID: \"92340e7a-b249-4701-8527-eacaf9ba1fd7\") " Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.664274 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzdvp\" (UniqueName: \"kubernetes.io/projected/5e10ec1e-60c7-497a-bd8f-710c01db5b28-kube-api-access-pzdvp\") pod \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.664302 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-fernet-keys\") pod \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\" (UID: \"5e10ec1e-60c7-497a-bd8f-710c01db5b28\") " Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.669628 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92340e7a-b249-4701-8527-eacaf9ba1fd7-scripts" (OuterVolumeSpecName: "scripts") pod "92340e7a-b249-4701-8527-eacaf9ba1fd7" (UID: "92340e7a-b249-4701-8527-eacaf9ba1fd7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.673958 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92340e7a-b249-4701-8527-eacaf9ba1fd7-logs" (OuterVolumeSpecName: "logs") pod "92340e7a-b249-4701-8527-eacaf9ba1fd7" (UID: "92340e7a-b249-4701-8527-eacaf9ba1fd7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.690470 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6m7js" event={"ID":"92340e7a-b249-4701-8527-eacaf9ba1fd7","Type":"ContainerDied","Data":"a08eac820b85d27ce549beef14e9add370113edd7ab615b3497dd20b1d0ff79a"} Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.690679 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a08eac820b85d27ce549beef14e9add370113edd7ab615b3497dd20b1d0ff79a" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.690789 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6m7js" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.696153 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-scripts" (OuterVolumeSpecName: "scripts") pod "5e10ec1e-60c7-497a-bd8f-710c01db5b28" (UID: "5e10ec1e-60c7-497a-bd8f-710c01db5b28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.696694 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7141df0-548e-4699-8620-4d85ba1b1218-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e7141df0-548e-4699-8620-4d85ba1b1218" (UID: "e7141df0-548e-4699-8620-4d85ba1b1218"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.699479 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92340e7a-b249-4701-8527-eacaf9ba1fd7-kube-api-access-xtxn6" (OuterVolumeSpecName: "kube-api-access-xtxn6") pod "92340e7a-b249-4701-8527-eacaf9ba1fd7" (UID: "92340e7a-b249-4701-8527-eacaf9ba1fd7"). InnerVolumeSpecName "kube-api-access-xtxn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.700501 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7141df0-548e-4699-8620-4d85ba1b1218-kube-api-access-p24bt" (OuterVolumeSpecName: "kube-api-access-p24bt") pod "e7141df0-548e-4699-8620-4d85ba1b1218" (UID: "e7141df0-548e-4699-8620-4d85ba1b1218"). InnerVolumeSpecName "kube-api-access-p24bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.700749 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vftqw" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.700752 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vftqw" event={"ID":"5e10ec1e-60c7-497a-bd8f-710c01db5b28","Type":"ContainerDied","Data":"905ed9ad428eb9f94f056ae45d1bd5b5ac098154fdac9931ef5b24adc55b1669"} Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.700790 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="905ed9ad428eb9f94f056ae45d1bd5b5ac098154fdac9931ef5b24adc55b1669" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.700869 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5e10ec1e-60c7-497a-bd8f-710c01db5b28" (UID: "5e10ec1e-60c7-497a-bd8f-710c01db5b28"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.710432 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e10ec1e-60c7-497a-bd8f-710c01db5b28-kube-api-access-pzdvp" (OuterVolumeSpecName: "kube-api-access-pzdvp") pod "5e10ec1e-60c7-497a-bd8f-710c01db5b28" (UID: "5e10ec1e-60c7-497a-bd8f-710c01db5b28"). InnerVolumeSpecName "kube-api-access-pzdvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.714313 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5e10ec1e-60c7-497a-bd8f-710c01db5b28" (UID: "5e10ec1e-60c7-497a-bd8f-710c01db5b28"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.725943 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7141df0-548e-4699-8620-4d85ba1b1218-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7141df0-548e-4699-8620-4d85ba1b1218" (UID: "e7141df0-548e-4699-8620-4d85ba1b1218"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.726132 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-64bhp" event={"ID":"e7141df0-548e-4699-8620-4d85ba1b1218","Type":"ContainerDied","Data":"08d3ad05ed11744317a12ff425bc0fd49a867967dc12bfd42415a3e51b2eaf7d"} Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.726163 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08d3ad05ed11744317a12ff425bc0fd49a867967dc12bfd42415a3e51b2eaf7d" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.726264 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-64bhp" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.726821 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-config-data" (OuterVolumeSpecName: "config-data") pod "5e10ec1e-60c7-497a-bd8f-710c01db5b28" (UID: "5e10ec1e-60c7-497a-bd8f-710c01db5b28"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.727998 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92340e7a-b249-4701-8527-eacaf9ba1fd7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92340e7a-b249-4701-8527-eacaf9ba1fd7" (UID: "92340e7a-b249-4701-8527-eacaf9ba1fd7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.761576 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92340e7a-b249-4701-8527-eacaf9ba1fd7-config-data" (OuterVolumeSpecName: "config-data") pod "92340e7a-b249-4701-8527-eacaf9ba1fd7" (UID: "92340e7a-b249-4701-8527-eacaf9ba1fd7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.768701 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p24bt\" (UniqueName: \"kubernetes.io/projected/e7141df0-548e-4699-8620-4d85ba1b1218-kube-api-access-p24bt\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.768724 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92340e7a-b249-4701-8527-eacaf9ba1fd7-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.768733 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzdvp\" (UniqueName: \"kubernetes.io/projected/5e10ec1e-60c7-497a-bd8f-710c01db5b28-kube-api-access-pzdvp\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.768742 4765 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.768770 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7141df0-548e-4699-8620-4d85ba1b1218-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.768782 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92340e7a-b249-4701-8527-eacaf9ba1fd7-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.768793 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtxn6\" (UniqueName: \"kubernetes.io/projected/92340e7a-b249-4701-8527-eacaf9ba1fd7-kube-api-access-xtxn6\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.768801 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.768809 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.768817 4765 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/e7141df0-548e-4699-8620-4d85ba1b1218-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.768842 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92340e7a-b249-4701-8527-eacaf9ba1fd7-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.768852 4765 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.768860 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92340e7a-b249-4701-8527-eacaf9ba1fd7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.781791 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e10ec1e-60c7-497a-bd8f-710c01db5b28" (UID: "5e10ec1e-60c7-497a-bd8f-710c01db5b28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.874379 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e10ec1e-60c7-497a-bd8f-710c01db5b28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:32 crc kubenswrapper[4765]: I0121 13:23:32.954521 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.077221 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-dns-swift-storage-0\") pod \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.077397 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-ovsdbserver-sb\") pod \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.077467 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-config\") pod \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.077527 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-ovsdbserver-nb\") pod \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.077550 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rl8b\" (UniqueName: \"kubernetes.io/projected/5d57e410-03d3-422a-ba44-f5a2ed1e1417-kube-api-access-8rl8b\") pod \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\" (UID: 
\"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.077587 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-dns-svc\") pod \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\" (UID: \"5d57e410-03d3-422a-ba44-f5a2ed1e1417\") " Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.089466 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d57e410-03d3-422a-ba44-f5a2ed1e1417-kube-api-access-8rl8b" (OuterVolumeSpecName: "kube-api-access-8rl8b") pod "5d57e410-03d3-422a-ba44-f5a2ed1e1417" (UID: "5d57e410-03d3-422a-ba44-f5a2ed1e1417"). InnerVolumeSpecName "kube-api-access-8rl8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.161644 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-config" (OuterVolumeSpecName: "config") pod "5d57e410-03d3-422a-ba44-f5a2ed1e1417" (UID: "5d57e410-03d3-422a-ba44-f5a2ed1e1417"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.179385 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.179426 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rl8b\" (UniqueName: \"kubernetes.io/projected/5d57e410-03d3-422a-ba44-f5a2ed1e1417-kube-api-access-8rl8b\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.184955 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5d57e410-03d3-422a-ba44-f5a2ed1e1417" (UID: "5d57e410-03d3-422a-ba44-f5a2ed1e1417"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.187054 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5d57e410-03d3-422a-ba44-f5a2ed1e1417" (UID: "5d57e410-03d3-422a-ba44-f5a2ed1e1417"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.201813 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5d57e410-03d3-422a-ba44-f5a2ed1e1417" (UID: "5d57e410-03d3-422a-ba44-f5a2ed1e1417"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.218997 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5d57e410-03d3-422a-ba44-f5a2ed1e1417" (UID: "5d57e410-03d3-422a-ba44-f5a2ed1e1417"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.280946 4765 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.280982 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.280991 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.281002 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d57e410-03d3-422a-ba44-f5a2ed1e1417-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.287812 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6558674dbd-lct5s" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.356606 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.356802 4765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.362759 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.372903 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.372995 4765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.377878 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-86c57777f6-gqpgv" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.378031 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-86c57777f6-gqpgv" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.397599 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86c57777f6-gqpgv" podUID="1241b1f0-34c1-401a-b91f-13b72926cc2c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.398711 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.777470 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"78bb670d-da93-47aa-af39-981e6a9bff0f","Type":"ContainerStarted","Data":"52a73a97a1ecdfd1ac850c202d1c5dceca451c63ca1727f3dbdb20e40b76e014"} Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.791931 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.792271 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-cnm96" event={"ID":"5d57e410-03d3-422a-ba44-f5a2ed1e1417","Type":"ContainerDied","Data":"15d9e92db3a6fea01aa4fffe3d9549a2cc4ffe7ec0aa98237fb934b6910d538f"} Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.792319 4765 scope.go:117] "RemoveContainer" containerID="571e112947afaefee85d3cb4e2b6d6d0f03f65661f6d1e8d32a2f104c267813d" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.812588 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7c5d9867cf-9ffzm"] Jan 21 13:23:33 crc kubenswrapper[4765]: E0121 13:23:33.812956 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7141df0-548e-4699-8620-4d85ba1b1218" containerName="barbican-db-sync" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.812968 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7141df0-548e-4699-8620-4d85ba1b1218" containerName="barbican-db-sync" Jan 21 13:23:33 crc kubenswrapper[4765]: E0121 13:23:33.812980 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d57e410-03d3-422a-ba44-f5a2ed1e1417" containerName="dnsmasq-dns" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.812986 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d57e410-03d3-422a-ba44-f5a2ed1e1417" containerName="dnsmasq-dns" Jan 21 13:23:33 crc kubenswrapper[4765]: E0121 13:23:33.813002 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d57e410-03d3-422a-ba44-f5a2ed1e1417" containerName="init" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.813008 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d57e410-03d3-422a-ba44-f5a2ed1e1417" containerName="init" Jan 21 13:23:33 crc kubenswrapper[4765]: E0121 13:23:33.813017 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92340e7a-b249-4701-8527-eacaf9ba1fd7" containerName="placement-db-sync" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.813022 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="92340e7a-b249-4701-8527-eacaf9ba1fd7" containerName="placement-db-sync" Jan 21 13:23:33 crc kubenswrapper[4765]: E0121 13:23:33.813032 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e10ec1e-60c7-497a-bd8f-710c01db5b28" containerName="keystone-bootstrap" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.813039 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e10ec1e-60c7-497a-bd8f-710c01db5b28" containerName="keystone-bootstrap" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.813237 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7141df0-548e-4699-8620-4d85ba1b1218" containerName="barbican-db-sync" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.813267 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="92340e7a-b249-4701-8527-eacaf9ba1fd7" containerName="placement-db-sync" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.813289 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e10ec1e-60c7-497a-bd8f-710c01db5b28" 
containerName="keystone-bootstrap" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.813298 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d57e410-03d3-422a-ba44-f5a2ed1e1417" containerName="dnsmasq-dns" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.813853 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.823187 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.823234 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.823391 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.823488 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.823520 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-w98g4" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.831307 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c5d9867cf-9ffzm"] Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.834435 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.849431 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-cnm96"] Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.862202 4765 scope.go:117] "RemoveContainer" containerID="cf840497ecd2ecf9bbaf1281ba69019067972d1472cc515d9f17d55a7f7a836c" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.867266 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-cnm96"] Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.931395 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-config-data\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.931475 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-credential-keys\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.931527 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w52rn\" (UniqueName: \"kubernetes.io/projected/80b18085-cc60-4891-bf22-0c8535624d5b-kube-api-access-w52rn\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.931557 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-public-tls-certs\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.931590 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-fernet-keys\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.931613 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-scripts\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.931638 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-combined-ca-bundle\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:33 crc kubenswrapper[4765]: I0121 13:23:33.931695 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-internal-tls-certs\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.002834 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-86cbcc788d-b897j"] Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.009620 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.017943 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.018131 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.018313 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-fkgrt" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.018386 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.018441 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.035116 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-credential-keys\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.035175 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w52rn\" (UniqueName: \"kubernetes.io/projected/80b18085-cc60-4891-bf22-0c8535624d5b-kube-api-access-w52rn\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.035201 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-public-tls-certs\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.035255 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-fernet-keys\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.035291 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-scripts\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.035336 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-combined-ca-bundle\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.035393 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-internal-tls-certs\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " 
pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.035468 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-config-data\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.059078 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-internal-tls-certs\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.059938 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-fernet-keys\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.065632 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-86cbcc788d-b897j"] Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.066351 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-combined-ca-bundle\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.069056 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-public-tls-certs\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.069460 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-credential-keys\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.073689 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-config-data\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.073747 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80b18085-cc60-4891-bf22-0c8535624d5b-scripts\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.119137 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w52rn\" (UniqueName: \"kubernetes.io/projected/80b18085-cc60-4891-bf22-0c8535624d5b-kube-api-access-w52rn\") pod \"keystone-7c5d9867cf-9ffzm\" (UID: \"80b18085-cc60-4891-bf22-0c8535624d5b\") " 
pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.137552 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/369424ef-89f9-462a-80aa-6eb36049f6b5-combined-ca-bundle\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.137607 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/369424ef-89f9-462a-80aa-6eb36049f6b5-internal-tls-certs\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.137649 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/369424ef-89f9-462a-80aa-6eb36049f6b5-logs\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.137682 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/369424ef-89f9-462a-80aa-6eb36049f6b5-scripts\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.137715 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/369424ef-89f9-462a-80aa-6eb36049f6b5-config-data\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.137793 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgqrb\" (UniqueName: \"kubernetes.io/projected/369424ef-89f9-462a-80aa-6eb36049f6b5-kube-api-access-xgqrb\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.137855 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/369424ef-89f9-462a-80aa-6eb36049f6b5-public-tls-certs\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.162670 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.240034 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/369424ef-89f9-462a-80aa-6eb36049f6b5-public-tls-certs\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.240154 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/369424ef-89f9-462a-80aa-6eb36049f6b5-combined-ca-bundle\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.240172 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/369424ef-89f9-462a-80aa-6eb36049f6b5-internal-tls-certs\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.240314 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/369424ef-89f9-462a-80aa-6eb36049f6b5-logs\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.240338 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/369424ef-89f9-462a-80aa-6eb36049f6b5-scripts\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.240358 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/369424ef-89f9-462a-80aa-6eb36049f6b5-config-data\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.240458 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgqrb\" (UniqueName: \"kubernetes.io/projected/369424ef-89f9-462a-80aa-6eb36049f6b5-kube-api-access-xgqrb\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.243674 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/369424ef-89f9-462a-80aa-6eb36049f6b5-logs\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.250512 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/369424ef-89f9-462a-80aa-6eb36049f6b5-public-tls-certs\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.253768 4765 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/369424ef-89f9-462a-80aa-6eb36049f6b5-internal-tls-certs\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.262326 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/369424ef-89f9-462a-80aa-6eb36049f6b5-combined-ca-bundle\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.271182 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/369424ef-89f9-462a-80aa-6eb36049f6b5-config-data\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.271419 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/369424ef-89f9-462a-80aa-6eb36049f6b5-scripts\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.311105 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgqrb\" (UniqueName: \"kubernetes.io/projected/369424ef-89f9-462a-80aa-6eb36049f6b5-kube-api-access-xgqrb\") pod \"placement-86cbcc788d-b897j\" (UID: \"369424ef-89f9-462a-80aa-6eb36049f6b5\") " pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.335050 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.365321 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6f7f76cb7-rnmdt"] Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.380524 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6f7f76cb7-rnmdt" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.384297 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.384503 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xkmhm" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.387396 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5dd9885f5b-xm6hz"] Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.390923 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.393439 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.424151 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.436412 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6f7f76cb7-rnmdt"] Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.443163 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/099ff49f-9143-4fa1-9844-cb66dc028aca-config-data\") pod \"barbican-keystone-listener-5dd9885f5b-xm6hz\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.443303 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/099ff49f-9143-4fa1-9844-cb66dc028aca-combined-ca-bundle\") pod \"barbican-keystone-listener-5dd9885f5b-xm6hz\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.443327 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be7431e9-c408-49e2-80b8-4d13da26f0ee-combined-ca-bundle\") pod \"barbican-worker-6f7f76cb7-rnmdt\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " pod="openstack/barbican-worker-6f7f76cb7-rnmdt" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.443349 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/099ff49f-9143-4fa1-9844-cb66dc028aca-logs\") pod \"barbican-keystone-listener-5dd9885f5b-xm6hz\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.443393 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrj4c\" (UniqueName: \"kubernetes.io/projected/be7431e9-c408-49e2-80b8-4d13da26f0ee-kube-api-access-mrj4c\") pod \"barbican-worker-6f7f76cb7-rnmdt\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " pod="openstack/barbican-worker-6f7f76cb7-rnmdt" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.443413 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be7431e9-c408-49e2-80b8-4d13da26f0ee-config-data-custom\") pod \"barbican-worker-6f7f76cb7-rnmdt\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " pod="openstack/barbican-worker-6f7f76cb7-rnmdt" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.443439 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be7431e9-c408-49e2-80b8-4d13da26f0ee-config-data\") pod \"barbican-worker-6f7f76cb7-rnmdt\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " pod="openstack/barbican-worker-6f7f76cb7-rnmdt" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.443459 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/099ff49f-9143-4fa1-9844-cb66dc028aca-config-data-custom\") pod \"barbican-keystone-listener-5dd9885f5b-xm6hz\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.443475 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmqfm\" (UniqueName: \"kubernetes.io/projected/099ff49f-9143-4fa1-9844-cb66dc028aca-kube-api-access-xmqfm\") pod \"barbican-keystone-listener-5dd9885f5b-xm6hz\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.443492 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be7431e9-c408-49e2-80b8-4d13da26f0ee-logs\") pod \"barbican-worker-6f7f76cb7-rnmdt\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " pod="openstack/barbican-worker-6f7f76cb7-rnmdt" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.461269 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5dd9885f5b-xm6hz"] Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.544739 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/099ff49f-9143-4fa1-9844-cb66dc028aca-combined-ca-bundle\") pod \"barbican-keystone-listener-5dd9885f5b-xm6hz\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.544799 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be7431e9-c408-49e2-80b8-4d13da26f0ee-combined-ca-bundle\") pod \"barbican-worker-6f7f76cb7-rnmdt\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " pod="openstack/barbican-worker-6f7f76cb7-rnmdt" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.544832 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/099ff49f-9143-4fa1-9844-cb66dc028aca-logs\") pod \"barbican-keystone-listener-5dd9885f5b-xm6hz\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.544905 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrj4c\" (UniqueName: \"kubernetes.io/projected/be7431e9-c408-49e2-80b8-4d13da26f0ee-kube-api-access-mrj4c\") pod \"barbican-worker-6f7f76cb7-rnmdt\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " pod="openstack/barbican-worker-6f7f76cb7-rnmdt" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.544940 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be7431e9-c408-49e2-80b8-4d13da26f0ee-config-data-custom\") pod \"barbican-worker-6f7f76cb7-rnmdt\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " pod="openstack/barbican-worker-6f7f76cb7-rnmdt" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.544973 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/be7431e9-c408-49e2-80b8-4d13da26f0ee-config-data\") pod \"barbican-worker-6f7f76cb7-rnmdt\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " pod="openstack/barbican-worker-6f7f76cb7-rnmdt" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.545484 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/099ff49f-9143-4fa1-9844-cb66dc028aca-logs\") pod \"barbican-keystone-listener-5dd9885f5b-xm6hz\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.548791 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/099ff49f-9143-4fa1-9844-cb66dc028aca-config-data-custom\") pod \"barbican-keystone-listener-5dd9885f5b-xm6hz\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.548852 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmqfm\" (UniqueName: \"kubernetes.io/projected/099ff49f-9143-4fa1-9844-cb66dc028aca-kube-api-access-xmqfm\") pod \"barbican-keystone-listener-5dd9885f5b-xm6hz\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.548882 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be7431e9-c408-49e2-80b8-4d13da26f0ee-logs\") pod \"barbican-worker-6f7f76cb7-rnmdt\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " pod="openstack/barbican-worker-6f7f76cb7-rnmdt" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.548995 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/099ff49f-9143-4fa1-9844-cb66dc028aca-config-data\") pod \"barbican-keystone-listener-5dd9885f5b-xm6hz\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.550603 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be7431e9-c408-49e2-80b8-4d13da26f0ee-logs\") pod \"barbican-worker-6f7f76cb7-rnmdt\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " pod="openstack/barbican-worker-6f7f76cb7-rnmdt" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.567768 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/099ff49f-9143-4fa1-9844-cb66dc028aca-combined-ca-bundle\") pod \"barbican-keystone-listener-5dd9885f5b-xm6hz\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.567815 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/099ff49f-9143-4fa1-9844-cb66dc028aca-config-data\") pod \"barbican-keystone-listener-5dd9885f5b-xm6hz\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.568761 4765 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/099ff49f-9143-4fa1-9844-cb66dc028aca-config-data-custom\") pod \"barbican-keystone-listener-5dd9885f5b-xm6hz\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.571239 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be7431e9-c408-49e2-80b8-4d13da26f0ee-config-data-custom\") pod \"barbican-worker-6f7f76cb7-rnmdt\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " pod="openstack/barbican-worker-6f7f76cb7-rnmdt" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.575660 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be7431e9-c408-49e2-80b8-4d13da26f0ee-config-data\") pod \"barbican-worker-6f7f76cb7-rnmdt\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " pod="openstack/barbican-worker-6f7f76cb7-rnmdt" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.577488 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be7431e9-c408-49e2-80b8-4d13da26f0ee-combined-ca-bundle\") pod \"barbican-worker-6f7f76cb7-rnmdt\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " pod="openstack/barbican-worker-6f7f76cb7-rnmdt" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.596577 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrj4c\" (UniqueName: \"kubernetes.io/projected/be7431e9-c408-49e2-80b8-4d13da26f0ee-kube-api-access-mrj4c\") pod \"barbican-worker-6f7f76cb7-rnmdt\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " pod="openstack/barbican-worker-6f7f76cb7-rnmdt" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.664995 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmqfm\" (UniqueName: \"kubernetes.io/projected/099ff49f-9143-4fa1-9844-cb66dc028aca-kube-api-access-xmqfm\") pod \"barbican-keystone-listener-5dd9885f5b-xm6hz\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.674776 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-gcftg"] Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.684330 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.708121 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-gcftg"] Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.768956 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6f7f76cb7-rnmdt" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.778535 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.858956 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-config\") pod \"dnsmasq-dns-848cf88cfc-gcftg\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.859049 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-gcftg\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.859099 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g5c4\" (UniqueName: \"kubernetes.io/projected/75275df0-97ad-49b4-ac22-558bb6b29857-kube-api-access-7g5c4\") pod \"dnsmasq-dns-848cf88cfc-gcftg\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.859160 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-gcftg\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.859191 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-gcftg\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.859322 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-gcftg\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:34 crc kubenswrapper[4765]: I0121 13:23:34.963684 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g5c4\" (UniqueName: \"kubernetes.io/projected/75275df0-97ad-49b4-ac22-558bb6b29857-kube-api-access-7g5c4\") pod \"dnsmasq-dns-848cf88cfc-gcftg\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.014382 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-gcftg\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.014653 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-gcftg\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.014923 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-gcftg\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.014999 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-config\") pod \"dnsmasq-dns-848cf88cfc-gcftg\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.015146 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-gcftg\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.016099 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-gcftg\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.003941 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7fd49c47b6-4hvtg"] Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.022860 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-gcftg\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.024324 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-gcftg\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.028296 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-config\") pod \"dnsmasq-dns-848cf88cfc-gcftg\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.040991 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-gcftg\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.042961 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.082256 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g5c4\" (UniqueName: \"kubernetes.io/projected/75275df0-97ad-49b4-ac22-558bb6b29857-kube-api-access-7g5c4\") pod \"dnsmasq-dns-848cf88cfc-gcftg\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.121084 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aca8cf8-41b9-44a4-8948-94717695f201-combined-ca-bundle\") pod \"barbican-keystone-listener-7fd49c47b6-4hvtg\" (UID: \"8aca8cf8-41b9-44a4-8948-94717695f201\") " pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.121292 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8aca8cf8-41b9-44a4-8948-94717695f201-logs\") pod \"barbican-keystone-listener-7fd49c47b6-4hvtg\" (UID: \"8aca8cf8-41b9-44a4-8948-94717695f201\") " pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.121479 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8aca8cf8-41b9-44a4-8948-94717695f201-config-data-custom\") pod \"barbican-keystone-listener-7fd49c47b6-4hvtg\" (UID: \"8aca8cf8-41b9-44a4-8948-94717695f201\") " pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.121700 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv26d\" (UniqueName: \"kubernetes.io/projected/8aca8cf8-41b9-44a4-8948-94717695f201-kube-api-access-mv26d\") pod \"barbican-keystone-listener-7fd49c47b6-4hvtg\" (UID: \"8aca8cf8-41b9-44a4-8948-94717695f201\") " pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.121948 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aca8cf8-41b9-44a4-8948-94717695f201-config-data\") pod \"barbican-keystone-listener-7fd49c47b6-4hvtg\" (UID: \"8aca8cf8-41b9-44a4-8948-94717695f201\") " pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.163853 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7fd49c47b6-4hvtg"] Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.199869 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-667d97cc75-tm9lv"] Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.201940 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-667d97cc75-tm9lv" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.230462 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aca8cf8-41b9-44a4-8948-94717695f201-combined-ca-bundle\") pod \"barbican-keystone-listener-7fd49c47b6-4hvtg\" (UID: \"8aca8cf8-41b9-44a4-8948-94717695f201\") " pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.230730 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8aca8cf8-41b9-44a4-8948-94717695f201-logs\") pod \"barbican-keystone-listener-7fd49c47b6-4hvtg\" (UID: \"8aca8cf8-41b9-44a4-8948-94717695f201\") " pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.231043 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8aca8cf8-41b9-44a4-8948-94717695f201-config-data-custom\") pod \"barbican-keystone-listener-7fd49c47b6-4hvtg\" (UID: \"8aca8cf8-41b9-44a4-8948-94717695f201\") " pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.231144 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv26d\" (UniqueName: \"kubernetes.io/projected/8aca8cf8-41b9-44a4-8948-94717695f201-kube-api-access-mv26d\") pod \"barbican-keystone-listener-7fd49c47b6-4hvtg\" (UID: \"8aca8cf8-41b9-44a4-8948-94717695f201\") " pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.231317 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aca8cf8-41b9-44a4-8948-94717695f201-config-data\") pod \"barbican-keystone-listener-7fd49c47b6-4hvtg\" (UID: \"8aca8cf8-41b9-44a4-8948-94717695f201\") " pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.242039 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8aca8cf8-41b9-44a4-8948-94717695f201-logs\") pod \"barbican-keystone-listener-7fd49c47b6-4hvtg\" (UID: \"8aca8cf8-41b9-44a4-8948-94717695f201\") " pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.260233 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-667d97cc75-tm9lv"] Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.279502 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aca8cf8-41b9-44a4-8948-94717695f201-config-data\") pod \"barbican-keystone-listener-7fd49c47b6-4hvtg\" (UID: \"8aca8cf8-41b9-44a4-8948-94717695f201\") " pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.280964 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv26d\" (UniqueName: \"kubernetes.io/projected/8aca8cf8-41b9-44a4-8948-94717695f201-kube-api-access-mv26d\") pod \"barbican-keystone-listener-7fd49c47b6-4hvtg\" (UID: \"8aca8cf8-41b9-44a4-8948-94717695f201\") " pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" Jan 
21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.287966 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aca8cf8-41b9-44a4-8948-94717695f201-combined-ca-bundle\") pod \"barbican-keystone-listener-7fd49c47b6-4hvtg\" (UID: \"8aca8cf8-41b9-44a4-8948-94717695f201\") " pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.288451 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8aca8cf8-41b9-44a4-8948-94717695f201-config-data-custom\") pod \"barbican-keystone-listener-7fd49c47b6-4hvtg\" (UID: \"8aca8cf8-41b9-44a4-8948-94717695f201\") " pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.311176 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-75df64647b-fv9d5"] Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.313634 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.322372 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.328230 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75df64647b-fv9d5"] Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.340728 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9390565-b433-4d8e-a112-7f7539cbdc3e-config-data-custom\") pod \"barbican-worker-667d97cc75-tm9lv\" (UID: \"d9390565-b433-4d8e-a112-7f7539cbdc3e\") " pod="openstack/barbican-worker-667d97cc75-tm9lv" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.340793 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9390565-b433-4d8e-a112-7f7539cbdc3e-logs\") pod \"barbican-worker-667d97cc75-tm9lv\" (UID: \"d9390565-b433-4d8e-a112-7f7539cbdc3e\") " pod="openstack/barbican-worker-667d97cc75-tm9lv" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.340811 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9390565-b433-4d8e-a112-7f7539cbdc3e-config-data\") pod \"barbican-worker-667d97cc75-tm9lv\" (UID: \"d9390565-b433-4d8e-a112-7f7539cbdc3e\") " pod="openstack/barbican-worker-667d97cc75-tm9lv" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.340830 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbw9h\" (UniqueName: \"kubernetes.io/projected/d9390565-b433-4d8e-a112-7f7539cbdc3e-kube-api-access-bbw9h\") pod \"barbican-worker-667d97cc75-tm9lv\" (UID: \"d9390565-b433-4d8e-a112-7f7539cbdc3e\") " pod="openstack/barbican-worker-667d97cc75-tm9lv" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.340844 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9390565-b433-4d8e-a112-7f7539cbdc3e-combined-ca-bundle\") pod \"barbican-worker-667d97cc75-tm9lv\" (UID: \"d9390565-b433-4d8e-a112-7f7539cbdc3e\") " 
pod="openstack/barbican-worker-667d97cc75-tm9lv" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.353466 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.443157 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6322a13-8045-4ecb-bb13-6b249dbbc016-combined-ca-bundle\") pod \"barbican-api-75df64647b-fv9d5\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.443620 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6322a13-8045-4ecb-bb13-6b249dbbc016-logs\") pod \"barbican-api-75df64647b-fv9d5\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.443687 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9390565-b433-4d8e-a112-7f7539cbdc3e-config-data-custom\") pod \"barbican-worker-667d97cc75-tm9lv\" (UID: \"d9390565-b433-4d8e-a112-7f7539cbdc3e\") " pod="openstack/barbican-worker-667d97cc75-tm9lv" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.443740 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9390565-b433-4d8e-a112-7f7539cbdc3e-logs\") pod \"barbican-worker-667d97cc75-tm9lv\" (UID: \"d9390565-b433-4d8e-a112-7f7539cbdc3e\") " pod="openstack/barbican-worker-667d97cc75-tm9lv" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.443761 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhd9r\" (UniqueName: \"kubernetes.io/projected/c6322a13-8045-4ecb-bb13-6b249dbbc016-kube-api-access-rhd9r\") pod \"barbican-api-75df64647b-fv9d5\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.443789 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9390565-b433-4d8e-a112-7f7539cbdc3e-config-data\") pod \"barbican-worker-667d97cc75-tm9lv\" (UID: \"d9390565-b433-4d8e-a112-7f7539cbdc3e\") " pod="openstack/barbican-worker-667d97cc75-tm9lv" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.443832 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbw9h\" (UniqueName: \"kubernetes.io/projected/d9390565-b433-4d8e-a112-7f7539cbdc3e-kube-api-access-bbw9h\") pod \"barbican-worker-667d97cc75-tm9lv\" (UID: \"d9390565-b433-4d8e-a112-7f7539cbdc3e\") " pod="openstack/barbican-worker-667d97cc75-tm9lv" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.443858 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9390565-b433-4d8e-a112-7f7539cbdc3e-combined-ca-bundle\") pod \"barbican-worker-667d97cc75-tm9lv\" (UID: \"d9390565-b433-4d8e-a112-7f7539cbdc3e\") " pod="openstack/barbican-worker-667d97cc75-tm9lv" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.443913 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6322a13-8045-4ecb-bb13-6b249dbbc016-config-data-custom\") pod \"barbican-api-75df64647b-fv9d5\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.443995 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6322a13-8045-4ecb-bb13-6b249dbbc016-config-data\") pod \"barbican-api-75df64647b-fv9d5\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.449199 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9390565-b433-4d8e-a112-7f7539cbdc3e-logs\") pod \"barbican-worker-667d97cc75-tm9lv\" (UID: \"d9390565-b433-4d8e-a112-7f7539cbdc3e\") " pod="openstack/barbican-worker-667d97cc75-tm9lv" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.451406 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9390565-b433-4d8e-a112-7f7539cbdc3e-combined-ca-bundle\") pod \"barbican-worker-667d97cc75-tm9lv\" (UID: \"d9390565-b433-4d8e-a112-7f7539cbdc3e\") " pod="openstack/barbican-worker-667d97cc75-tm9lv" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.462052 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9390565-b433-4d8e-a112-7f7539cbdc3e-config-data\") pod \"barbican-worker-667d97cc75-tm9lv\" (UID: \"d9390565-b433-4d8e-a112-7f7539cbdc3e\") " pod="openstack/barbican-worker-667d97cc75-tm9lv" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.463376 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9390565-b433-4d8e-a112-7f7539cbdc3e-config-data-custom\") pod \"barbican-worker-667d97cc75-tm9lv\" (UID: \"d9390565-b433-4d8e-a112-7f7539cbdc3e\") " pod="openstack/barbican-worker-667d97cc75-tm9lv" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.470386 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.538726 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbw9h\" (UniqueName: \"kubernetes.io/projected/d9390565-b433-4d8e-a112-7f7539cbdc3e-kube-api-access-bbw9h\") pod \"barbican-worker-667d97cc75-tm9lv\" (UID: \"d9390565-b433-4d8e-a112-7f7539cbdc3e\") " pod="openstack/barbican-worker-667d97cc75-tm9lv" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.547378 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6322a13-8045-4ecb-bb13-6b249dbbc016-config-data-custom\") pod \"barbican-api-75df64647b-fv9d5\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.547459 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6322a13-8045-4ecb-bb13-6b249dbbc016-config-data\") pod \"barbican-api-75df64647b-fv9d5\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.547534 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6322a13-8045-4ecb-bb13-6b249dbbc016-combined-ca-bundle\") pod \"barbican-api-75df64647b-fv9d5\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.547606 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6322a13-8045-4ecb-bb13-6b249dbbc016-logs\") pod \"barbican-api-75df64647b-fv9d5\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.547668 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhd9r\" (UniqueName: \"kubernetes.io/projected/c6322a13-8045-4ecb-bb13-6b249dbbc016-kube-api-access-rhd9r\") pod \"barbican-api-75df64647b-fv9d5\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.549233 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6322a13-8045-4ecb-bb13-6b249dbbc016-logs\") pod \"barbican-api-75df64647b-fv9d5\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.563371 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6322a13-8045-4ecb-bb13-6b249dbbc016-config-data-custom\") pod \"barbican-api-75df64647b-fv9d5\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.575090 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6322a13-8045-4ecb-bb13-6b249dbbc016-config-data\") pod \"barbican-api-75df64647b-fv9d5\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " 
pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.575882 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6322a13-8045-4ecb-bb13-6b249dbbc016-combined-ca-bundle\") pod \"barbican-api-75df64647b-fv9d5\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.584407 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-667d97cc75-tm9lv" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.596569 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhd9r\" (UniqueName: \"kubernetes.io/projected/c6322a13-8045-4ecb-bb13-6b249dbbc016-kube-api-access-rhd9r\") pod \"barbican-api-75df64647b-fv9d5\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.603363 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c5d9867cf-9ffzm"] Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.653480 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.669283 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d57e410-03d3-422a-ba44-f5a2ed1e1417" path="/var/lib/kubelet/pods/5d57e410-03d3-422a-ba44-f5a2ed1e1417/volumes" Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.682756 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-86cbcc788d-b897j"] Jan 21 13:23:35 crc kubenswrapper[4765]: W0121 13:23:35.744443 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod369424ef_89f9_462a_80aa_6eb36049f6b5.slice/crio-6896548ed23c95a1de4ce92c068fa702417071dc8a5067a1177f78aed36e5961 WatchSource:0}: Error finding container 6896548ed23c95a1de4ce92c068fa702417071dc8a5067a1177f78aed36e5961: Status 404 returned error can't find the container with id 6896548ed23c95a1de4ce92c068fa702417071dc8a5067a1177f78aed36e5961 Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.963576 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c5d9867cf-9ffzm" event={"ID":"80b18085-cc60-4891-bf22-0c8535624d5b","Type":"ContainerStarted","Data":"2e8b7c6c3d1f914d4f55badee84ab83af5b0693746c374aa2feb49ea77e2703c"} Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.980703 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-v4h97" event={"ID":"3f0ee201-f570-4414-9feb-616192dfca3b","Type":"ContainerStarted","Data":"e65d21496902b707aaddc6034d3b49f4e82bf1523f99b3f1e8975ce3badc470a"} Jan 21 13:23:35 crc kubenswrapper[4765]: I0121 13:23:35.986760 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6f7f76cb7-rnmdt"] Jan 21 13:23:36 crc kubenswrapper[4765]: I0121 13:23:36.014558 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-86cbcc788d-b897j" event={"ID":"369424ef-89f9-462a-80aa-6eb36049f6b5","Type":"ContainerStarted","Data":"6896548ed23c95a1de4ce92c068fa702417071dc8a5067a1177f78aed36e5961"} Jan 21 13:23:36 crc kubenswrapper[4765]: I0121 13:23:36.034881 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-db-sync-v4h97" podStartSLOduration=5.979505824 podStartE2EDuration="1m3.034863893s" podCreationTimestamp="2026-01-21 13:22:33 +0000 UTC" firstStartedPulling="2026-01-21 13:22:35.619226874 +0000 UTC m=+1216.636952696" lastFinishedPulling="2026-01-21 13:23:32.674584943 +0000 UTC m=+1273.692310765" observedRunningTime="2026-01-21 13:23:36.014647743 +0000 UTC m=+1277.032373565" watchObservedRunningTime="2026-01-21 13:23:36.034863893 +0000 UTC m=+1277.052589715" Jan 21 13:23:36 crc kubenswrapper[4765]: I0121 13:23:36.191664 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5dd9885f5b-xm6hz"] Jan 21 13:23:36 crc kubenswrapper[4765]: I0121 13:23:36.478887 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-gcftg"] Jan 21 13:23:36 crc kubenswrapper[4765]: I0121 13:23:36.751359 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-667d97cc75-tm9lv"] Jan 21 13:23:36 crc kubenswrapper[4765]: I0121 13:23:36.760248 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7fd49c47b6-4hvtg"] Jan 21 13:23:36 crc kubenswrapper[4765]: I0121 13:23:36.808270 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75df64647b-fv9d5"] Jan 21 13:23:36 crc kubenswrapper[4765]: W0121 13:23:36.814506 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9390565_b433_4d8e_a112_7f7539cbdc3e.slice/crio-74ab41f42f5cb42299433013fa7486dda29eadeecfcfc98aaf04842845edfa3c WatchSource:0}: Error finding container 74ab41f42f5cb42299433013fa7486dda29eadeecfcfc98aaf04842845edfa3c: Status 404 returned error can't find the container with id 74ab41f42f5cb42299433013fa7486dda29eadeecfcfc98aaf04842845edfa3c Jan 21 13:23:37 crc kubenswrapper[4765]: I0121 13:23:37.033878 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c5d9867cf-9ffzm" event={"ID":"80b18085-cc60-4891-bf22-0c8535624d5b","Type":"ContainerStarted","Data":"2b99ec0c0d897c1ec56cc647fc86ed32a785f73f2774fe0e346e7ce6b6eba31e"} Jan 21 13:23:37 crc kubenswrapper[4765]: I0121 13:23:37.036537 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:23:37 crc kubenswrapper[4765]: I0121 13:23:37.042299 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" event={"ID":"75275df0-97ad-49b4-ac22-558bb6b29857","Type":"ContainerStarted","Data":"a95653bc49d55bcd750e5c8d3150e27665c9822c1e52783ec972c3e2c0ecdfa0"} Jan 21 13:23:37 crc kubenswrapper[4765]: I0121 13:23:37.056280 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75df64647b-fv9d5" event={"ID":"c6322a13-8045-4ecb-bb13-6b249dbbc016","Type":"ContainerStarted","Data":"ca007329e3ecc07575fde1b2de9541105a035ffe127b41696ba8ff98f1d0e0d5"} Jan 21 13:23:37 crc kubenswrapper[4765]: I0121 13:23:37.061166 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" event={"ID":"099ff49f-9143-4fa1-9844-cb66dc028aca","Type":"ContainerStarted","Data":"11bae7532db028342299c3a755cb94ec9bc9251c4f316e6bfdb0cd899d30d8ba"} Jan 21 13:23:37 crc kubenswrapper[4765]: I0121 13:23:37.081143 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7c5d9867cf-9ffzm" podStartSLOduration=4.081118998 
podStartE2EDuration="4.081118998s" podCreationTimestamp="2026-01-21 13:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:23:37.065908455 +0000 UTC m=+1278.083634277" watchObservedRunningTime="2026-01-21 13:23:37.081118998 +0000 UTC m=+1278.098844820" Jan 21 13:23:37 crc kubenswrapper[4765]: I0121 13:23:37.081661 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-667d97cc75-tm9lv" event={"ID":"d9390565-b433-4d8e-a112-7f7539cbdc3e","Type":"ContainerStarted","Data":"74ab41f42f5cb42299433013fa7486dda29eadeecfcfc98aaf04842845edfa3c"} Jan 21 13:23:37 crc kubenswrapper[4765]: I0121 13:23:37.122479 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f7f76cb7-rnmdt" event={"ID":"be7431e9-c408-49e2-80b8-4d13da26f0ee","Type":"ContainerStarted","Data":"acda5a870f88926f36af29f82545df661ff4c938f6ea5dc80fcd1374f17b6593"} Jan 21 13:23:37 crc kubenswrapper[4765]: I0121 13:23:37.159417 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" event={"ID":"8aca8cf8-41b9-44a4-8948-94717695f201","Type":"ContainerStarted","Data":"6777cd849057e0ff70e5496148fcb81b7b6011f02c0e07f6bd31281d892483d7"} Jan 21 13:23:38 crc kubenswrapper[4765]: I0121 13:23:38.194084 4765 generic.go:334] "Generic (PLEG): container finished" podID="75275df0-97ad-49b4-ac22-558bb6b29857" containerID="931ac5bdd0611358ff04b89c6cad124fdeb5af3905540ea18feccd137719879f" exitCode=0 Jan 21 13:23:38 crc kubenswrapper[4765]: I0121 13:23:38.194802 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" event={"ID":"75275df0-97ad-49b4-ac22-558bb6b29857","Type":"ContainerDied","Data":"931ac5bdd0611358ff04b89c6cad124fdeb5af3905540ea18feccd137719879f"} Jan 21 13:23:38 crc kubenswrapper[4765]: I0121 13:23:38.213403 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75df64647b-fv9d5" event={"ID":"c6322a13-8045-4ecb-bb13-6b249dbbc016","Type":"ContainerStarted","Data":"d1dc232164a2476a3d98a1da36af9d0c60ccf029c9004224a65944d9a0016a87"} Jan 21 13:23:38 crc kubenswrapper[4765]: I0121 13:23:38.216170 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-86cbcc788d-b897j" event={"ID":"369424ef-89f9-462a-80aa-6eb36049f6b5","Type":"ContainerStarted","Data":"c853b66fbc7252164ab7b756a7c443e3cd3edf83c596333c980640ac2e68e03b"} Jan 21 13:23:38 crc kubenswrapper[4765]: I0121 13:23:38.216226 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-86cbcc788d-b897j" event={"ID":"369424ef-89f9-462a-80aa-6eb36049f6b5","Type":"ContainerStarted","Data":"bb720ede725fe828ac9b0b023aa6270ad7c4a796d70739680081d83ba876595b"} Jan 21 13:23:38 crc kubenswrapper[4765]: I0121 13:23:38.265156 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-86cbcc788d-b897j" podStartSLOduration=5.265133353 podStartE2EDuration="5.265133353s" podCreationTimestamp="2026-01-21 13:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:23:38.247490699 +0000 UTC m=+1279.265216541" watchObservedRunningTime="2026-01-21 13:23:38.265133353 +0000 UTC m=+1279.282859175" Jan 21 13:23:39 crc kubenswrapper[4765]: I0121 13:23:39.242334 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-75df64647b-fv9d5" event={"ID":"c6322a13-8045-4ecb-bb13-6b249dbbc016","Type":"ContainerStarted","Data":"7dd5469de316831b449b93119b03a822a408d993a6d205fc477bbf0eb45c270e"} Jan 21 13:23:39 crc kubenswrapper[4765]: I0121 13:23:39.242717 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:39 crc kubenswrapper[4765]: I0121 13:23:39.242736 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:23:39 crc kubenswrapper[4765]: I0121 13:23:39.242751 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:39 crc kubenswrapper[4765]: I0121 13:23:39.242763 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:39 crc kubenswrapper[4765]: I0121 13:23:39.283817 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-75df64647b-fv9d5" podStartSLOduration=4.283794535 podStartE2EDuration="4.283794535s" podCreationTimestamp="2026-01-21 13:23:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:23:39.283169677 +0000 UTC m=+1280.300895499" watchObservedRunningTime="2026-01-21 13:23:39.283794535 +0000 UTC m=+1280.301520357" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.228937 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6ccc6775fd-qhnc2"] Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.236016 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.242034 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.275596 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6ccc6775fd-qhnc2"] Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.278542 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.319983 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" event={"ID":"75275df0-97ad-49b4-ac22-558bb6b29857","Type":"ContainerStarted","Data":"f824fc8b37baf987c95380beb2b679b54e84b2154bb3b3a6bd1202d3b35635f6"} Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.320397 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.385595 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4424b63d-0688-473e-80e8-8cd4148911a1-combined-ca-bundle\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.385693 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4424b63d-0688-473e-80e8-8cd4148911a1-internal-tls-certs\") pod \"barbican-api-6ccc6775fd-qhnc2\" 
(UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.385755 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4424b63d-0688-473e-80e8-8cd4148911a1-public-tls-certs\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.385803 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvpv7\" (UniqueName: \"kubernetes.io/projected/4424b63d-0688-473e-80e8-8cd4148911a1-kube-api-access-rvpv7\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.385846 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4424b63d-0688-473e-80e8-8cd4148911a1-config-data-custom\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.385879 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4424b63d-0688-473e-80e8-8cd4148911a1-logs\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.385953 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4424b63d-0688-473e-80e8-8cd4148911a1-config-data\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.394986 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" podStartSLOduration=6.394964474 podStartE2EDuration="6.394964474s" podCreationTimestamp="2026-01-21 13:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:23:40.385526689 +0000 UTC m=+1281.403252511" watchObservedRunningTime="2026-01-21 13:23:40.394964474 +0000 UTC m=+1281.412690306" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.488289 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvpv7\" (UniqueName: \"kubernetes.io/projected/4424b63d-0688-473e-80e8-8cd4148911a1-kube-api-access-rvpv7\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.488369 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4424b63d-0688-473e-80e8-8cd4148911a1-config-data-custom\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 
13:23:40.488416 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4424b63d-0688-473e-80e8-8cd4148911a1-logs\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.488540 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4424b63d-0688-473e-80e8-8cd4148911a1-config-data\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.488625 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4424b63d-0688-473e-80e8-8cd4148911a1-combined-ca-bundle\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.488731 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4424b63d-0688-473e-80e8-8cd4148911a1-internal-tls-certs\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.488824 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4424b63d-0688-473e-80e8-8cd4148911a1-public-tls-certs\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.490804 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4424b63d-0688-473e-80e8-8cd4148911a1-logs\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.500691 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4424b63d-0688-473e-80e8-8cd4148911a1-config-data-custom\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.505027 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4424b63d-0688-473e-80e8-8cd4148911a1-config-data\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.505199 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4424b63d-0688-473e-80e8-8cd4148911a1-internal-tls-certs\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.517859 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4424b63d-0688-473e-80e8-8cd4148911a1-combined-ca-bundle\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.542994 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4424b63d-0688-473e-80e8-8cd4148911a1-public-tls-certs\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.551020 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvpv7\" (UniqueName: \"kubernetes.io/projected/4424b63d-0688-473e-80e8-8cd4148911a1-kube-api-access-rvpv7\") pod \"barbican-api-6ccc6775fd-qhnc2\" (UID: \"4424b63d-0688-473e-80e8-8cd4148911a1\") " pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:40 crc kubenswrapper[4765]: I0121 13:23:40.626716 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:43 crc kubenswrapper[4765]: I0121 13:23:43.281156 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6558674dbd-lct5s" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 21 13:23:43 crc kubenswrapper[4765]: I0121 13:23:43.377307 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86c57777f6-gqpgv" podUID="1241b1f0-34c1-401a-b91f-13b72926cc2c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 21 13:23:43 crc kubenswrapper[4765]: I0121 13:23:43.856302 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6ccc6775fd-qhnc2"] Jan 21 13:23:44 crc kubenswrapper[4765]: I0121 13:23:44.429536 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6ccc6775fd-qhnc2" event={"ID":"4424b63d-0688-473e-80e8-8cd4148911a1","Type":"ContainerStarted","Data":"87cd8284ac74eb23dbe76a6fc0b5a2513180126ee9b09a66b1ee897d6ca75ee2"} Jan 21 13:23:44 crc kubenswrapper[4765]: I0121 13:23:44.429792 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6ccc6775fd-qhnc2" event={"ID":"4424b63d-0688-473e-80e8-8cd4148911a1","Type":"ContainerStarted","Data":"ebe3ee2bcef1113a1356a227166e63e7e7c80b8ee15bc8c517778b5d4a6957a7"} Jan 21 13:23:44 crc kubenswrapper[4765]: I0121 13:23:44.439017 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" event={"ID":"099ff49f-9143-4fa1-9844-cb66dc028aca","Type":"ContainerStarted","Data":"e77823b748ab01858ea881864999dee9c96060c6f517ef60c0f718e508b9a594"} Jan 21 13:23:44 crc kubenswrapper[4765]: I0121 13:23:44.439056 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" event={"ID":"099ff49f-9143-4fa1-9844-cb66dc028aca","Type":"ContainerStarted","Data":"1fdf21e941ff05ff463d85795b7227c43d6220b563240ffe6c900ab82f07b728"} Jan 21 13:23:44 crc kubenswrapper[4765]: I0121 13:23:44.448562 4765 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/barbican-worker-667d97cc75-tm9lv" event={"ID":"d9390565-b433-4d8e-a112-7f7539cbdc3e","Type":"ContainerStarted","Data":"c56833076f5226f5f882365777a6bc605c0654146220aab866ff296dee2e975c"} Jan 21 13:23:44 crc kubenswrapper[4765]: I0121 13:23:44.448614 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-667d97cc75-tm9lv" event={"ID":"d9390565-b433-4d8e-a112-7f7539cbdc3e","Type":"ContainerStarted","Data":"beac388b60a9dad0d57a96c9b9dc8ef26bef3b4bd913e4d66e0fdb9fd7c4bac2"} Jan 21 13:23:44 crc kubenswrapper[4765]: I0121 13:23:44.454454 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f7f76cb7-rnmdt" event={"ID":"be7431e9-c408-49e2-80b8-4d13da26f0ee","Type":"ContainerStarted","Data":"c8334accf94bed406501ddfac62757bc2cb5cd307c267a320c0a1376f8cf1c9e"} Jan 21 13:23:44 crc kubenswrapper[4765]: I0121 13:23:44.454539 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f7f76cb7-rnmdt" event={"ID":"be7431e9-c408-49e2-80b8-4d13da26f0ee","Type":"ContainerStarted","Data":"f6f24da14aa07819931402b685115b0ac9c3fbbf1ea8954ee43b5d6dda5db9f2"} Jan 21 13:23:44 crc kubenswrapper[4765]: I0121 13:23:44.459263 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" event={"ID":"8aca8cf8-41b9-44a4-8948-94717695f201","Type":"ContainerStarted","Data":"6a286c366e3ab975b3e21d4163c64396c44310513ee359a51f38aa1fabd75980"} Jan 21 13:23:44 crc kubenswrapper[4765]: I0121 13:23:44.459305 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" event={"ID":"8aca8cf8-41b9-44a4-8948-94717695f201","Type":"ContainerStarted","Data":"49da5fe1dd35954dee2179d5c636c29ea0290399078afd520497fd0e1e114988"} Jan 21 13:23:44 crc kubenswrapper[4765]: I0121 13:23:44.478223 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" podStartSLOduration=3.653127382 podStartE2EDuration="10.47819107s" podCreationTimestamp="2026-01-21 13:23:34 +0000 UTC" firstStartedPulling="2026-01-21 13:23:36.358274076 +0000 UTC m=+1277.375999898" lastFinishedPulling="2026-01-21 13:23:43.183337764 +0000 UTC m=+1284.201063586" observedRunningTime="2026-01-21 13:23:44.467468097 +0000 UTC m=+1285.485193919" watchObservedRunningTime="2026-01-21 13:23:44.47819107 +0000 UTC m=+1285.495916892" Jan 21 13:23:44 crc kubenswrapper[4765]: I0121 13:23:44.505445 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-667d97cc75-tm9lv" podStartSLOduration=4.135902573 podStartE2EDuration="10.505428245s" podCreationTimestamp="2026-01-21 13:23:34 +0000 UTC" firstStartedPulling="2026-01-21 13:23:36.824907225 +0000 UTC m=+1277.842633047" lastFinishedPulling="2026-01-21 13:23:43.194432897 +0000 UTC m=+1284.212158719" observedRunningTime="2026-01-21 13:23:44.499966175 +0000 UTC m=+1285.517691997" watchObservedRunningTime="2026-01-21 13:23:44.505428245 +0000 UTC m=+1285.523154067" Jan 21 13:23:44 crc kubenswrapper[4765]: I0121 13:23:44.541578 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6f7f76cb7-rnmdt" podStartSLOduration=3.6633563909999998 podStartE2EDuration="10.541562089s" podCreationTimestamp="2026-01-21 13:23:34 +0000 UTC" firstStartedPulling="2026-01-21 13:23:36.317680042 +0000 UTC m=+1277.335405864" lastFinishedPulling="2026-01-21 13:23:43.19588574 
+0000 UTC m=+1284.213611562" observedRunningTime="2026-01-21 13:23:44.534247125 +0000 UTC m=+1285.551972937" watchObservedRunningTime="2026-01-21 13:23:44.541562089 +0000 UTC m=+1285.559287911" Jan 21 13:23:44 crc kubenswrapper[4765]: I0121 13:23:44.575672 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7fd49c47b6-4hvtg" podStartSLOduration=4.177186938 podStartE2EDuration="10.575645743s" podCreationTimestamp="2026-01-21 13:23:34 +0000 UTC" firstStartedPulling="2026-01-21 13:23:36.797251399 +0000 UTC m=+1277.814977221" lastFinishedPulling="2026-01-21 13:23:43.195710204 +0000 UTC m=+1284.213436026" observedRunningTime="2026-01-21 13:23:44.566087354 +0000 UTC m=+1285.583813176" watchObservedRunningTime="2026-01-21 13:23:44.575645743 +0000 UTC m=+1285.593371565" Jan 21 13:23:44 crc kubenswrapper[4765]: I0121 13:23:44.600511 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6f7f76cb7-rnmdt"] Jan 21 13:23:44 crc kubenswrapper[4765]: I0121 13:23:44.624531 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5dd9885f5b-xm6hz"] Jan 21 13:23:45 crc kubenswrapper[4765]: I0121 13:23:45.355396 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:23:45 crc kubenswrapper[4765]: I0121 13:23:45.438342 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-x5sp2"] Jan 21 13:23:45 crc kubenswrapper[4765]: I0121 13:23:45.439194 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" podUID="87f9269a-2c20-4132-8f2c-c5e8c7493fc9" containerName="dnsmasq-dns" containerID="cri-o://f8a73ccbe593ba79f289915182b7e1741421333934829e5bb29ff7ccc180175a" gracePeriod=10 Jan 21 13:23:45 crc kubenswrapper[4765]: I0121 13:23:45.485866 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6ccc6775fd-qhnc2" event={"ID":"4424b63d-0688-473e-80e8-8cd4148911a1","Type":"ContainerStarted","Data":"48be97ceedadb594ba1dc6a8c70f5f8e8a46142bb5a1f933d121f7b8495bb308"} Jan 21 13:23:45 crc kubenswrapper[4765]: I0121 13:23:45.486983 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:45 crc kubenswrapper[4765]: I0121 13:23:45.487272 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:45 crc kubenswrapper[4765]: I0121 13:23:45.531410 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6ccc6775fd-qhnc2" podStartSLOduration=5.531387889 podStartE2EDuration="5.531387889s" podCreationTimestamp="2026-01-21 13:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:23:45.526171827 +0000 UTC m=+1286.543897649" watchObservedRunningTime="2026-01-21 13:23:45.531387889 +0000 UTC m=+1286.549113711" Jan 21 13:23:45 crc kubenswrapper[4765]: I0121 13:23:45.959113 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" podUID="87f9269a-2c20-4132-8f2c-c5e8c7493fc9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.154:5353: connect: connection refused" Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.501660 4765 generic.go:334] "Generic 
(PLEG): container finished" podID="87f9269a-2c20-4132-8f2c-c5e8c7493fc9" containerID="f8a73ccbe593ba79f289915182b7e1741421333934829e5bb29ff7ccc180175a" exitCode=0 Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.502142 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6f7f76cb7-rnmdt" podUID="be7431e9-c408-49e2-80b8-4d13da26f0ee" containerName="barbican-worker-log" containerID="cri-o://f6f24da14aa07819931402b685115b0ac9c3fbbf1ea8954ee43b5d6dda5db9f2" gracePeriod=30 Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.502546 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" event={"ID":"87f9269a-2c20-4132-8f2c-c5e8c7493fc9","Type":"ContainerDied","Data":"f8a73ccbe593ba79f289915182b7e1741421333934829e5bb29ff7ccc180175a"} Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.502584 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" event={"ID":"87f9269a-2c20-4132-8f2c-c5e8c7493fc9","Type":"ContainerDied","Data":"4172b9d63e26ba467db14305e76bcd3e8f8c6ddbf765821735dd60a1c1b06bc3"} Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.502599 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4172b9d63e26ba467db14305e76bcd3e8f8c6ddbf765821735dd60a1c1b06bc3" Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.502729 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" podUID="099ff49f-9143-4fa1-9844-cb66dc028aca" containerName="barbican-keystone-listener-log" containerID="cri-o://1fdf21e941ff05ff463d85795b7227c43d6220b563240ffe6c900ab82f07b728" gracePeriod=30 Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.504150 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6f7f76cb7-rnmdt" podUID="be7431e9-c408-49e2-80b8-4d13da26f0ee" containerName="barbican-worker" containerID="cri-o://c8334accf94bed406501ddfac62757bc2cb5cd307c267a320c0a1376f8cf1c9e" gracePeriod=30 Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.504251 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" podUID="099ff49f-9143-4fa1-9844-cb66dc028aca" containerName="barbican-keystone-listener" containerID="cri-o://e77823b748ab01858ea881864999dee9c96060c6f517ef60c0f718e508b9a594" gracePeriod=30 Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.632830 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.663298 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-dns-svc\") pod \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.663353 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-ovsdbserver-sb\") pod \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.663441 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5gnt\" (UniqueName: \"kubernetes.io/projected/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-kube-api-access-q5gnt\") pod \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.663530 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-ovsdbserver-nb\") pod \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.663629 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-dns-swift-storage-0\") pod \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.663668 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-config\") pod \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\" (UID: \"87f9269a-2c20-4132-8f2c-c5e8c7493fc9\") " Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.864938 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-kube-api-access-q5gnt" (OuterVolumeSpecName: "kube-api-access-q5gnt") pod "87f9269a-2c20-4132-8f2c-c5e8c7493fc9" (UID: "87f9269a-2c20-4132-8f2c-c5e8c7493fc9"). InnerVolumeSpecName "kube-api-access-q5gnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.909479 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5gnt\" (UniqueName: \"kubernetes.io/projected/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-kube-api-access-q5gnt\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.924502 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "87f9269a-2c20-4132-8f2c-c5e8c7493fc9" (UID: "87f9269a-2c20-4132-8f2c-c5e8c7493fc9"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.924827 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "87f9269a-2c20-4132-8f2c-c5e8c7493fc9" (UID: "87f9269a-2c20-4132-8f2c-c5e8c7493fc9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.927654 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "87f9269a-2c20-4132-8f2c-c5e8c7493fc9" (UID: "87f9269a-2c20-4132-8f2c-c5e8c7493fc9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.955878 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-config" (OuterVolumeSpecName: "config") pod "87f9269a-2c20-4132-8f2c-c5e8c7493fc9" (UID: "87f9269a-2c20-4132-8f2c-c5e8c7493fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:46 crc kubenswrapper[4765]: I0121 13:23:46.969220 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "87f9269a-2c20-4132-8f2c-c5e8c7493fc9" (UID: "87f9269a-2c20-4132-8f2c-c5e8c7493fc9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:23:47 crc kubenswrapper[4765]: I0121 13:23:47.012035 4765 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:47 crc kubenswrapper[4765]: I0121 13:23:47.012077 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:47 crc kubenswrapper[4765]: I0121 13:23:47.012097 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:47 crc kubenswrapper[4765]: I0121 13:23:47.012106 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:47 crc kubenswrapper[4765]: I0121 13:23:47.012115 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87f9269a-2c20-4132-8f2c-c5e8c7493fc9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:47 crc kubenswrapper[4765]: I0121 13:23:47.470994 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:47 crc kubenswrapper[4765]: I0121 13:23:47.525784 4765 generic.go:334] "Generic (PLEG): container finished" podID="be7431e9-c408-49e2-80b8-4d13da26f0ee" containerID="f6f24da14aa07819931402b685115b0ac9c3fbbf1ea8954ee43b5d6dda5db9f2" exitCode=143 Jan 21 13:23:47 crc 
kubenswrapper[4765]: I0121 13:23:47.525897 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f7f76cb7-rnmdt" event={"ID":"be7431e9-c408-49e2-80b8-4d13da26f0ee","Type":"ContainerDied","Data":"f6f24da14aa07819931402b685115b0ac9c3fbbf1ea8954ee43b5d6dda5db9f2"} Jan 21 13:23:47 crc kubenswrapper[4765]: I0121 13:23:47.528440 4765 generic.go:334] "Generic (PLEG): container finished" podID="099ff49f-9143-4fa1-9844-cb66dc028aca" containerID="1fdf21e941ff05ff463d85795b7227c43d6220b563240ffe6c900ab82f07b728" exitCode=143 Jan 21 13:23:47 crc kubenswrapper[4765]: I0121 13:23:47.528524 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-x5sp2" Jan 21 13:23:47 crc kubenswrapper[4765]: I0121 13:23:47.535945 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" event={"ID":"099ff49f-9143-4fa1-9844-cb66dc028aca","Type":"ContainerDied","Data":"1fdf21e941ff05ff463d85795b7227c43d6220b563240ffe6c900ab82f07b728"} Jan 21 13:23:47 crc kubenswrapper[4765]: I0121 13:23:47.585031 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-x5sp2"] Jan 21 13:23:47 crc kubenswrapper[4765]: I0121 13:23:47.665984 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-x5sp2"] Jan 21 13:23:48 crc kubenswrapper[4765]: I0121 13:23:48.240270 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-77dcd8ffdf-64j8s" Jan 21 13:23:48 crc kubenswrapper[4765]: I0121 13:23:48.327533 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-858874fc56-6kgbs"] Jan 21 13:23:48 crc kubenswrapper[4765]: I0121 13:23:48.327947 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-858874fc56-6kgbs" podUID="06ec9aac-7fc7-4070-bfc1-a23f1a27060a" containerName="neutron-api" containerID="cri-o://c2cc1c8c9784782125e37e34ddd38da385ce68363964beb236e4a274e1b53cc1" gracePeriod=30 Jan 21 13:23:48 crc kubenswrapper[4765]: I0121 13:23:48.328347 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-858874fc56-6kgbs" podUID="06ec9aac-7fc7-4070-bfc1-a23f1a27060a" containerName="neutron-httpd" containerID="cri-o://f147b57d7bb3d9a984b44d1d501cab848a2b423001fc765a7195550a05e30cf9" gracePeriod=30 Jan 21 13:23:49 crc kubenswrapper[4765]: I0121 13:23:49.587554 4765 generic.go:334] "Generic (PLEG): container finished" podID="06ec9aac-7fc7-4070-bfc1-a23f1a27060a" containerID="f147b57d7bb3d9a984b44d1d501cab848a2b423001fc765a7195550a05e30cf9" exitCode=0 Jan 21 13:23:49 crc kubenswrapper[4765]: I0121 13:23:49.587615 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-858874fc56-6kgbs" event={"ID":"06ec9aac-7fc7-4070-bfc1-a23f1a27060a","Type":"ContainerDied","Data":"f147b57d7bb3d9a984b44d1d501cab848a2b423001fc765a7195550a05e30cf9"} Jan 21 13:23:49 crc kubenswrapper[4765]: I0121 13:23:49.643159 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f9269a-2c20-4132-8f2c-c5e8c7493fc9" path="/var/lib/kubelet/pods/87f9269a-2c20-4132-8f2c-c5e8c7493fc9/volumes" Jan 21 13:23:49 crc kubenswrapper[4765]: I0121 13:23:49.739385 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-75df64647b-fv9d5" podUID="c6322a13-8045-4ecb-bb13-6b249dbbc016" containerName="barbican-api" probeResult="failure" output="Get 
\"http://10.217.0.164:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 13:23:49 crc kubenswrapper[4765]: I0121 13:23:49.739709 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-75df64647b-fv9d5" podUID="c6322a13-8045-4ecb-bb13-6b249dbbc016" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 13:23:50 crc kubenswrapper[4765]: I0121 13:23:50.352327 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:50 crc kubenswrapper[4765]: I0121 13:23:50.592826 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:50 crc kubenswrapper[4765]: I0121 13:23:50.609433 4765 generic.go:334] "Generic (PLEG): container finished" podID="3f0ee201-f570-4414-9feb-616192dfca3b" containerID="e65d21496902b707aaddc6034d3b49f4e82bf1523f99b3f1e8975ce3badc470a" exitCode=0 Jan 21 13:23:50 crc kubenswrapper[4765]: I0121 13:23:50.609515 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-v4h97" event={"ID":"3f0ee201-f570-4414-9feb-616192dfca3b","Type":"ContainerDied","Data":"e65d21496902b707aaddc6034d3b49f4e82bf1523f99b3f1e8975ce3badc470a"} Jan 21 13:23:50 crc kubenswrapper[4765]: I0121 13:23:50.629573 4765 generic.go:334] "Generic (PLEG): container finished" podID="06ec9aac-7fc7-4070-bfc1-a23f1a27060a" containerID="c2cc1c8c9784782125e37e34ddd38da385ce68363964beb236e4a274e1b53cc1" exitCode=0 Jan 21 13:23:50 crc kubenswrapper[4765]: I0121 13:23:50.630936 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-858874fc56-6kgbs" event={"ID":"06ec9aac-7fc7-4070-bfc1-a23f1a27060a","Type":"ContainerDied","Data":"c2cc1c8c9784782125e37e34ddd38da385ce68363964beb236e4a274e1b53cc1"} Jan 21 13:23:53 crc kubenswrapper[4765]: I0121 13:23:53.086688 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:53 crc kubenswrapper[4765]: I0121 13:23:53.205569 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6ccc6775fd-qhnc2" Jan 21 13:23:53 crc kubenswrapper[4765]: I0121 13:23:53.275041 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-75df64647b-fv9d5"] Jan 21 13:23:53 crc kubenswrapper[4765]: I0121 13:23:53.275359 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-75df64647b-fv9d5" podUID="c6322a13-8045-4ecb-bb13-6b249dbbc016" containerName="barbican-api-log" containerID="cri-o://d1dc232164a2476a3d98a1da36af9d0c60ccf029c9004224a65944d9a0016a87" gracePeriod=30 Jan 21 13:23:53 crc kubenswrapper[4765]: I0121 13:23:53.275416 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-75df64647b-fv9d5" podUID="c6322a13-8045-4ecb-bb13-6b249dbbc016" containerName="barbican-api" containerID="cri-o://7dd5469de316831b449b93119b03a822a408d993a6d205fc477bbf0eb45c270e" gracePeriod=30 Jan 21 13:23:53 crc kubenswrapper[4765]: I0121 13:23:53.297481 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-75df64647b-fv9d5" podUID="c6322a13-8045-4ecb-bb13-6b249dbbc016" containerName="barbican-api-log" probeResult="failure" output="Get 
\"http://10.217.0.164:9311/healthcheck\": EOF" Jan 21 13:23:53 crc kubenswrapper[4765]: I0121 13:23:53.297978 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75df64647b-fv9d5" podUID="c6322a13-8045-4ecb-bb13-6b249dbbc016" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": EOF" Jan 21 13:23:53 crc kubenswrapper[4765]: I0121 13:23:53.297191 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6558674dbd-lct5s" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 21 13:23:53 crc kubenswrapper[4765]: I0121 13:23:53.298459 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6558674dbd-lct5s" Jan 21 13:23:53 crc kubenswrapper[4765]: I0121 13:23:53.299377 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"aa436e74a6fd1c1c3a4ed7348015c8f931d8a51210c3f7b94c4c01885524ce52"} pod="openstack/horizon-6558674dbd-lct5s" containerMessage="Container horizon failed startup probe, will be restarted" Jan 21 13:23:53 crc kubenswrapper[4765]: I0121 13:23:53.299424 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6558674dbd-lct5s" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" containerID="cri-o://aa436e74a6fd1c1c3a4ed7348015c8f931d8a51210c3f7b94c4c01885524ce52" gracePeriod=30 Jan 21 13:23:53 crc kubenswrapper[4765]: I0121 13:23:53.377805 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86c57777f6-gqpgv" podUID="1241b1f0-34c1-401a-b91f-13b72926cc2c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 21 13:23:53 crc kubenswrapper[4765]: I0121 13:23:53.377902 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-86c57777f6-gqpgv" Jan 21 13:23:53 crc kubenswrapper[4765]: I0121 13:23:53.378777 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"46f1a7c9396eca5402ea7a2319db77d5ead07a4127c2f33dffbb8adc136e01da"} pod="openstack/horizon-86c57777f6-gqpgv" containerMessage="Container horizon failed startup probe, will be restarted" Jan 21 13:23:53 crc kubenswrapper[4765]: I0121 13:23:53.378809 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-86c57777f6-gqpgv" podUID="1241b1f0-34c1-401a-b91f-13b72926cc2c" containerName="horizon" containerID="cri-o://46f1a7c9396eca5402ea7a2319db77d5ead07a4127c2f33dffbb8adc136e01da" gracePeriod=30 Jan 21 13:23:53 crc kubenswrapper[4765]: I0121 13:23:53.725881 4765 generic.go:334] "Generic (PLEG): container finished" podID="c6322a13-8045-4ecb-bb13-6b249dbbc016" containerID="d1dc232164a2476a3d98a1da36af9d0c60ccf029c9004224a65944d9a0016a87" exitCode=143 Jan 21 13:23:53 crc kubenswrapper[4765]: I0121 13:23:53.725936 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75df64647b-fv9d5" event={"ID":"c6322a13-8045-4ecb-bb13-6b249dbbc016","Type":"ContainerDied","Data":"d1dc232164a2476a3d98a1da36af9d0c60ccf029c9004224a65944d9a0016a87"} Jan 21 
13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.181289 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-v4h97" Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.303599 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-db-sync-config-data\") pod \"3f0ee201-f570-4414-9feb-616192dfca3b\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.303653 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-scripts\") pod \"3f0ee201-f570-4414-9feb-616192dfca3b\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.303684 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-combined-ca-bundle\") pod \"3f0ee201-f570-4414-9feb-616192dfca3b\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.303740 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-config-data\") pod \"3f0ee201-f570-4414-9feb-616192dfca3b\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.303767 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f0ee201-f570-4414-9feb-616192dfca3b-etc-machine-id\") pod \"3f0ee201-f570-4414-9feb-616192dfca3b\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.303807 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4kwq\" (UniqueName: \"kubernetes.io/projected/3f0ee201-f570-4414-9feb-616192dfca3b-kube-api-access-k4kwq\") pod \"3f0ee201-f570-4414-9feb-616192dfca3b\" (UID: \"3f0ee201-f570-4414-9feb-616192dfca3b\") " Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.306125 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f0ee201-f570-4414-9feb-616192dfca3b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3f0ee201-f570-4414-9feb-616192dfca3b" (UID: "3f0ee201-f570-4414-9feb-616192dfca3b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.313350 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-scripts" (OuterVolumeSpecName: "scripts") pod "3f0ee201-f570-4414-9feb-616192dfca3b" (UID: "3f0ee201-f570-4414-9feb-616192dfca3b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.314624 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3f0ee201-f570-4414-9feb-616192dfca3b" (UID: "3f0ee201-f570-4414-9feb-616192dfca3b"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.314772 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f0ee201-f570-4414-9feb-616192dfca3b-kube-api-access-k4kwq" (OuterVolumeSpecName: "kube-api-access-k4kwq") pod "3f0ee201-f570-4414-9feb-616192dfca3b" (UID: "3f0ee201-f570-4414-9feb-616192dfca3b"). InnerVolumeSpecName "kube-api-access-k4kwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.402615 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-config-data" (OuterVolumeSpecName: "config-data") pod "3f0ee201-f570-4414-9feb-616192dfca3b" (UID: "3f0ee201-f570-4414-9feb-616192dfca3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.408031 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.408057 4765 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f0ee201-f570-4414-9feb-616192dfca3b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.408068 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4kwq\" (UniqueName: \"kubernetes.io/projected/3f0ee201-f570-4414-9feb-616192dfca3b-kube-api-access-k4kwq\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.408076 4765 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.408084 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.415330 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f0ee201-f570-4414-9feb-616192dfca3b" (UID: "3f0ee201-f570-4414-9feb-616192dfca3b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.512818 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f0ee201-f570-4414-9feb-616192dfca3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.753012 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-v4h97" event={"ID":"3f0ee201-f570-4414-9feb-616192dfca3b","Type":"ContainerDied","Data":"3c1012177a49b4c2fcf07a16a467d0a50c02a5deddde6df208feb01bdb2eb84c"} Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.753049 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c1012177a49b4c2fcf07a16a467d0a50c02a5deddde6df208feb01bdb2eb84c" Jan 21 13:23:55 crc kubenswrapper[4765]: I0121 13:23:55.753123 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-v4h97" Jan 21 13:23:56 crc kubenswrapper[4765]: I0121 13:23:56.828613 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 13:23:56 crc kubenswrapper[4765]: E0121 13:23:56.829438 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f9269a-2c20-4132-8f2c-c5e8c7493fc9" containerName="dnsmasq-dns" Jan 21 13:23:56 crc kubenswrapper[4765]: I0121 13:23:56.829459 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f9269a-2c20-4132-8f2c-c5e8c7493fc9" containerName="dnsmasq-dns" Jan 21 13:23:56 crc kubenswrapper[4765]: E0121 13:23:56.829477 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f9269a-2c20-4132-8f2c-c5e8c7493fc9" containerName="init" Jan 21 13:23:56 crc kubenswrapper[4765]: I0121 13:23:56.829485 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f9269a-2c20-4132-8f2c-c5e8c7493fc9" containerName="init" Jan 21 13:23:56 crc kubenswrapper[4765]: E0121 13:23:56.829507 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f0ee201-f570-4414-9feb-616192dfca3b" containerName="cinder-db-sync" Jan 21 13:23:56 crc kubenswrapper[4765]: I0121 13:23:56.829515 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f0ee201-f570-4414-9feb-616192dfca3b" containerName="cinder-db-sync" Jan 21 13:23:56 crc kubenswrapper[4765]: I0121 13:23:56.829776 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f9269a-2c20-4132-8f2c-c5e8c7493fc9" containerName="dnsmasq-dns" Jan 21 13:23:56 crc kubenswrapper[4765]: I0121 13:23:56.829811 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f0ee201-f570-4414-9feb-616192dfca3b" containerName="cinder-db-sync" Jan 21 13:23:56 crc kubenswrapper[4765]: I0121 13:23:56.831046 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 13:23:56 crc kubenswrapper[4765]: I0121 13:23:56.835328 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wwcwv" Jan 21 13:23:56 crc kubenswrapper[4765]: I0121 13:23:56.836992 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 13:23:56 crc kubenswrapper[4765]: I0121 13:23:56.837255 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 13:23:56 crc kubenswrapper[4765]: I0121 13:23:56.837444 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 13:23:56 crc kubenswrapper[4765]: I0121 13:23:56.856616 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 13:23:56 crc kubenswrapper[4765]: I0121 13:23:56.986049 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " pod="openstack/cinder-scheduler-0" Jan 21 13:23:56 crc kubenswrapper[4765]: I0121 13:23:56.986174 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " pod="openstack/cinder-scheduler-0" Jan 21 13:23:56 crc kubenswrapper[4765]: I0121 13:23:56.986203 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " pod="openstack/cinder-scheduler-0" Jan 21 13:23:56 crc kubenswrapper[4765]: I0121 13:23:56.986270 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-config-data\") pod \"cinder-scheduler-0\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " pod="openstack/cinder-scheduler-0" Jan 21 13:23:56 crc kubenswrapper[4765]: I0121 13:23:56.986522 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqzh7\" (UniqueName: \"kubernetes.io/projected/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-kube-api-access-gqzh7\") pod \"cinder-scheduler-0\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " pod="openstack/cinder-scheduler-0" Jan 21 13:23:56 crc kubenswrapper[4765]: I0121 13:23:56.986551 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-scripts\") pod \"cinder-scheduler-0\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " pod="openstack/cinder-scheduler-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.089290 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqzh7\" (UniqueName: \"kubernetes.io/projected/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-kube-api-access-gqzh7\") pod \"cinder-scheduler-0\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") 
" pod="openstack/cinder-scheduler-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.089339 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-scripts\") pod \"cinder-scheduler-0\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " pod="openstack/cinder-scheduler-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.089461 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " pod="openstack/cinder-scheduler-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.089504 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " pod="openstack/cinder-scheduler-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.089529 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " pod="openstack/cinder-scheduler-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.089562 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-config-data\") pod \"cinder-scheduler-0\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " pod="openstack/cinder-scheduler-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.090654 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " pod="openstack/cinder-scheduler-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.102414 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-vcvq5"] Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.103849 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-scripts\") pod \"cinder-scheduler-0\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " pod="openstack/cinder-scheduler-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.104756 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.108229 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " pod="openstack/cinder-scheduler-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.127477 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-config-data\") pod \"cinder-scheduler-0\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " pod="openstack/cinder-scheduler-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.131938 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqzh7\" (UniqueName: \"kubernetes.io/projected/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-kube-api-access-gqzh7\") pod \"cinder-scheduler-0\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " pod="openstack/cinder-scheduler-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.132501 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-vcvq5"] Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.132760 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " pod="openstack/cinder-scheduler-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.198151 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.200197 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-config\") pod \"dnsmasq-dns-6578955fd5-vcvq5\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.200252 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-vcvq5\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.200278 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-vcvq5\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.200300 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-vcvq5\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.217826 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q6hg\" (UniqueName: \"kubernetes.io/projected/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-kube-api-access-9q6hg\") pod \"dnsmasq-dns-6578955fd5-vcvq5\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.217925 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-dns-svc\") pod \"dnsmasq-dns-6578955fd5-vcvq5\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.248368 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.262931 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.271708 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.276641 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.319642 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-config-data-custom\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.319684 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-scripts\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.319698 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4frrl\" (UniqueName: \"kubernetes.io/projected/bd6ec590-d60c-465a-ba30-838efade5720-kube-api-access-4frrl\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.319725 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-config-data\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.319761 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd6ec590-d60c-465a-ba30-838efade5720-logs\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.319788 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd6ec590-d60c-465a-ba30-838efade5720-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.319836 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-config\") pod \"dnsmasq-dns-6578955fd5-vcvq5\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.319858 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-vcvq5\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.319877 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-vcvq5\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.319892 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-vcvq5\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.319914 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.319952 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q6hg\" (UniqueName: \"kubernetes.io/projected/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-kube-api-access-9q6hg\") pod \"dnsmasq-dns-6578955fd5-vcvq5\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.319991 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-dns-svc\") pod \"dnsmasq-dns-6578955fd5-vcvq5\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.320802 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-dns-svc\") pod \"dnsmasq-dns-6578955fd5-vcvq5\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.321320 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-config\") pod \"dnsmasq-dns-6578955fd5-vcvq5\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.321806 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-vcvq5\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.326843 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-vcvq5\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.327563 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-dns-swift-storage-0\") pod 
\"dnsmasq-dns-6578955fd5-vcvq5\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.357453 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q6hg\" (UniqueName: \"kubernetes.io/projected/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-kube-api-access-9q6hg\") pod \"dnsmasq-dns-6578955fd5-vcvq5\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.422750 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.422838 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-config-data-custom\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.422863 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-scripts\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.422883 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4frrl\" (UniqueName: \"kubernetes.io/projected/bd6ec590-d60c-465a-ba30-838efade5720-kube-api-access-4frrl\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.422914 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-config-data\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.422956 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd6ec590-d60c-465a-ba30-838efade5720-logs\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.422984 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd6ec590-d60c-465a-ba30-838efade5720-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.423078 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd6ec590-d60c-465a-ba30-838efade5720-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.426924 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bd6ec590-d60c-465a-ba30-838efade5720-logs\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.431828 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-config-data-custom\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.432952 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.436673 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-scripts\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.452485 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-config-data\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.459876 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4frrl\" (UniqueName: \"kubernetes.io/projected/bd6ec590-d60c-465a-ba30-838efade5720-kube-api-access-4frrl\") pod \"cinder-api-0\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.563358 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.608749 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.853200 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75df64647b-fv9d5" podUID="c6322a13-8045-4ecb-bb13-6b249dbbc016" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:37496->10.217.0.164:9311: read: connection reset by peer" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.853393 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75df64647b-fv9d5" podUID="c6322a13-8045-4ecb-bb13-6b249dbbc016" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:37506->10.217.0.164:9311: read: connection reset by peer" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.853606 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-75df64647b-fv9d5" podUID="c6322a13-8045-4ecb-bb13-6b249dbbc016" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:37486->10.217.0.164:9311: read: connection reset by peer" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.853671 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.853761 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75df64647b-fv9d5" podUID="c6322a13-8045-4ecb-bb13-6b249dbbc016" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": dial tcp 10.217.0.164:9311: connect: connection refused" Jan 21 13:23:57 crc kubenswrapper[4765]: I0121 13:23:57.853843 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:58 crc kubenswrapper[4765]: E0121 13:23:58.585118 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 21 13:23:58 crc kubenswrapper[4765]: E0121 13:23:58.585589 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z57t9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78bb670d-da93-47aa-af39-981e6a9bff0f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 13:23:58 crc kubenswrapper[4765]: E0121 13:23:58.589131 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"ceilometer-notification-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="78bb670d-da93-47aa-af39-981e6a9bff0f" Jan 21 13:23:58 crc kubenswrapper[4765]: I0121 13:23:58.823940 4765 generic.go:334] "Generic (PLEG): 
container finished" podID="c6322a13-8045-4ecb-bb13-6b249dbbc016" containerID="7dd5469de316831b449b93119b03a822a408d993a6d205fc477bbf0eb45c270e" exitCode=0 Jan 21 13:23:58 crc kubenswrapper[4765]: I0121 13:23:58.824049 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75df64647b-fv9d5" event={"ID":"c6322a13-8045-4ecb-bb13-6b249dbbc016","Type":"ContainerDied","Data":"7dd5469de316831b449b93119b03a822a408d993a6d205fc477bbf0eb45c270e"} Jan 21 13:23:58 crc kubenswrapper[4765]: I0121 13:23:58.836267 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78bb670d-da93-47aa-af39-981e6a9bff0f" containerName="sg-core" containerID="cri-o://52a73a97a1ecdfd1ac850c202d1c5dceca451c63ca1727f3dbdb20e40b76e014" gracePeriod=30 Jan 21 13:23:58 crc kubenswrapper[4765]: I0121 13:23:58.836592 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-858874fc56-6kgbs" event={"ID":"06ec9aac-7fc7-4070-bfc1-a23f1a27060a","Type":"ContainerDied","Data":"9e524b8e8c1d8438b0d306db66762339234756ca942cb5dc50a8a480a7216cf3"} Jan 21 13:23:58 crc kubenswrapper[4765]: I0121 13:23:58.836653 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e524b8e8c1d8438b0d306db66762339234756ca942cb5dc50a8a480a7216cf3" Jan 21 13:23:58 crc kubenswrapper[4765]: I0121 13:23:58.858592 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:58 crc kubenswrapper[4765]: I0121 13:23:58.966182 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frtt6\" (UniqueName: \"kubernetes.io/projected/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-kube-api-access-frtt6\") pod \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " Jan 21 13:23:58 crc kubenswrapper[4765]: I0121 13:23:58.966492 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-config\") pod \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " Jan 21 13:23:58 crc kubenswrapper[4765]: I0121 13:23:58.966647 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-combined-ca-bundle\") pod \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " Jan 21 13:23:58 crc kubenswrapper[4765]: I0121 13:23:58.966706 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-httpd-config\") pod \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " Jan 21 13:23:58 crc kubenswrapper[4765]: I0121 13:23:58.966779 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-ovndb-tls-certs\") pod \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\" (UID: \"06ec9aac-7fc7-4070-bfc1-a23f1a27060a\") " Jan 21 13:23:58 crc kubenswrapper[4765]: I0121 13:23:58.977846 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod 
"06ec9aac-7fc7-4070-bfc1-a23f1a27060a" (UID: "06ec9aac-7fc7-4070-bfc1-a23f1a27060a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.018069 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-kube-api-access-frtt6" (OuterVolumeSpecName: "kube-api-access-frtt6") pod "06ec9aac-7fc7-4070-bfc1-a23f1a27060a" (UID: "06ec9aac-7fc7-4070-bfc1-a23f1a27060a"). InnerVolumeSpecName "kube-api-access-frtt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.071111 4765 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.071151 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frtt6\" (UniqueName: \"kubernetes.io/projected/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-kube-api-access-frtt6\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.194602 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06ec9aac-7fc7-4070-bfc1-a23f1a27060a" (UID: "06ec9aac-7fc7-4070-bfc1-a23f1a27060a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.199770 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-config" (OuterVolumeSpecName: "config") pod "06ec9aac-7fc7-4070-bfc1-a23f1a27060a" (UID: "06ec9aac-7fc7-4070-bfc1-a23f1a27060a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.225441 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "06ec9aac-7fc7-4070-bfc1-a23f1a27060a" (UID: "06ec9aac-7fc7-4070-bfc1-a23f1a27060a"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.276509 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.276545 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.276556 4765 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/06ec9aac-7fc7-4070-bfc1-a23f1a27060a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.282703 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.379158 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6322a13-8045-4ecb-bb13-6b249dbbc016-config-data-custom\") pod \"c6322a13-8045-4ecb-bb13-6b249dbbc016\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.379283 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6322a13-8045-4ecb-bb13-6b249dbbc016-combined-ca-bundle\") pod \"c6322a13-8045-4ecb-bb13-6b249dbbc016\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.379333 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6322a13-8045-4ecb-bb13-6b249dbbc016-config-data\") pod \"c6322a13-8045-4ecb-bb13-6b249dbbc016\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.379395 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhd9r\" (UniqueName: \"kubernetes.io/projected/c6322a13-8045-4ecb-bb13-6b249dbbc016-kube-api-access-rhd9r\") pod \"c6322a13-8045-4ecb-bb13-6b249dbbc016\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.379441 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6322a13-8045-4ecb-bb13-6b249dbbc016-logs\") pod \"c6322a13-8045-4ecb-bb13-6b249dbbc016\" (UID: \"c6322a13-8045-4ecb-bb13-6b249dbbc016\") " Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.385751 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6322a13-8045-4ecb-bb13-6b249dbbc016-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c6322a13-8045-4ecb-bb13-6b249dbbc016" (UID: "c6322a13-8045-4ecb-bb13-6b249dbbc016"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.386297 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6322a13-8045-4ecb-bb13-6b249dbbc016-logs" (OuterVolumeSpecName: "logs") pod "c6322a13-8045-4ecb-bb13-6b249dbbc016" (UID: "c6322a13-8045-4ecb-bb13-6b249dbbc016"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.402605 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6322a13-8045-4ecb-bb13-6b249dbbc016-kube-api-access-rhd9r" (OuterVolumeSpecName: "kube-api-access-rhd9r") pod "c6322a13-8045-4ecb-bb13-6b249dbbc016" (UID: "c6322a13-8045-4ecb-bb13-6b249dbbc016"). InnerVolumeSpecName "kube-api-access-rhd9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.481697 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6322a13-8045-4ecb-bb13-6b249dbbc016-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6322a13-8045-4ecb-bb13-6b249dbbc016" (UID: "c6322a13-8045-4ecb-bb13-6b249dbbc016"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.483199 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6322a13-8045-4ecb-bb13-6b249dbbc016-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.483264 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhd9r\" (UniqueName: \"kubernetes.io/projected/c6322a13-8045-4ecb-bb13-6b249dbbc016-kube-api-access-rhd9r\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.483275 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6322a13-8045-4ecb-bb13-6b249dbbc016-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.483286 4765 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6322a13-8045-4ecb-bb13-6b249dbbc016-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.498464 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6322a13-8045-4ecb-bb13-6b249dbbc016-config-data" (OuterVolumeSpecName: "config-data") pod "c6322a13-8045-4ecb-bb13-6b249dbbc016" (UID: "c6322a13-8045-4ecb-bb13-6b249dbbc016"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.594499 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6322a13-8045-4ecb-bb13-6b249dbbc016-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.863508 4765 generic.go:334] "Generic (PLEG): container finished" podID="78bb670d-da93-47aa-af39-981e6a9bff0f" containerID="52a73a97a1ecdfd1ac850c202d1c5dceca451c63ca1727f3dbdb20e40b76e014" exitCode=2 Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.863853 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78bb670d-da93-47aa-af39-981e6a9bff0f","Type":"ContainerDied","Data":"52a73a97a1ecdfd1ac850c202d1c5dceca451c63ca1727f3dbdb20e40b76e014"} Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.863885 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78bb670d-da93-47aa-af39-981e6a9bff0f","Type":"ContainerDied","Data":"27878fb9ef3b43c2faa8b9d076a6722e1831790d203a369cf79cce3e50aaf1fa"} Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.863896 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27878fb9ef3b43c2faa8b9d076a6722e1831790d203a369cf79cce3e50aaf1fa" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.891462 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-858874fc56-6kgbs" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.892271 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75df64647b-fv9d5" event={"ID":"c6322a13-8045-4ecb-bb13-6b249dbbc016","Type":"ContainerDied","Data":"ca007329e3ecc07575fde1b2de9541105a035ffe127b41696ba8ff98f1d0e0d5"} Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.892305 4765 scope.go:117] "RemoveContainer" containerID="7dd5469de316831b449b93119b03a822a408d993a6d205fc477bbf0eb45c270e" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.892319 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75df64647b-fv9d5" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.923697 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.989558 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:23:59 crc kubenswrapper[4765]: I0121 13:23:59.990567 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.009895 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z57t9\" (UniqueName: \"kubernetes.io/projected/78bb670d-da93-47aa-af39-981e6a9bff0f-kube-api-access-z57t9\") pod \"78bb670d-da93-47aa-af39-981e6a9bff0f\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.009982 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78bb670d-da93-47aa-af39-981e6a9bff0f-run-httpd\") pod \"78bb670d-da93-47aa-af39-981e6a9bff0f\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.010179 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-config-data\") pod \"78bb670d-da93-47aa-af39-981e6a9bff0f\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.010271 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-combined-ca-bundle\") pod \"78bb670d-da93-47aa-af39-981e6a9bff0f\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.010362 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-scripts\") pod \"78bb670d-da93-47aa-af39-981e6a9bff0f\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.010412 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78bb670d-da93-47aa-af39-981e6a9bff0f-log-httpd\") pod \"78bb670d-da93-47aa-af39-981e6a9bff0f\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.010436 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-sg-core-conf-yaml\") pod \"78bb670d-da93-47aa-af39-981e6a9bff0f\" (UID: \"78bb670d-da93-47aa-af39-981e6a9bff0f\") " Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.013727 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-75df64647b-fv9d5"] Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.013811 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78bb670d-da93-47aa-af39-981e6a9bff0f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "78bb670d-da93-47aa-af39-981e6a9bff0f" (UID: "78bb670d-da93-47aa-af39-981e6a9bff0f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.025543 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78bb670d-da93-47aa-af39-981e6a9bff0f-kube-api-access-z57t9" (OuterVolumeSpecName: "kube-api-access-z57t9") pod "78bb670d-da93-47aa-af39-981e6a9bff0f" (UID: "78bb670d-da93-47aa-af39-981e6a9bff0f"). InnerVolumeSpecName "kube-api-access-z57t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.030731 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78bb670d-da93-47aa-af39-981e6a9bff0f" (UID: "78bb670d-da93-47aa-af39-981e6a9bff0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.030889 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-config-data" (OuterVolumeSpecName: "config-data") pod "78bb670d-da93-47aa-af39-981e6a9bff0f" (UID: "78bb670d-da93-47aa-af39-981e6a9bff0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.031658 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78bb670d-da93-47aa-af39-981e6a9bff0f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "78bb670d-da93-47aa-af39-981e6a9bff0f" (UID: "78bb670d-da93-47aa-af39-981e6a9bff0f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.040965 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-scripts" (OuterVolumeSpecName: "scripts") pod "78bb670d-da93-47aa-af39-981e6a9bff0f" (UID: "78bb670d-da93-47aa-af39-981e6a9bff0f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.044389 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-75df64647b-fv9d5"] Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.057506 4765 scope.go:117] "RemoveContainer" containerID="d1dc232164a2476a3d98a1da36af9d0c60ccf029c9004224a65944d9a0016a87" Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.061160 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-858874fc56-6kgbs"] Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.067766 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "78bb670d-da93-47aa-af39-981e6a9bff0f" (UID: "78bb670d-da93-47aa-af39-981e6a9bff0f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.084581 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-vcvq5"] Jan 21 13:24:00 crc kubenswrapper[4765]: W0121 13:24:00.087546 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bd7ae01_c989_4e75_bc95_4c39a5fb8670.slice/crio-e295ad4b73e4ef9f3df6a779ec640aae31e7ab8144437228c60db847506fa294 WatchSource:0}: Error finding container e295ad4b73e4ef9f3df6a779ec640aae31e7ab8144437228c60db847506fa294: Status 404 returned error can't find the container with id e295ad4b73e4ef9f3df6a779ec640aae31e7ab8144437228c60db847506fa294 Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.109154 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-858874fc56-6kgbs"] Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.112889 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.112917 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.112927 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.112936 4765 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78bb670d-da93-47aa-af39-981e6a9bff0f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.112945 4765 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78bb670d-da93-47aa-af39-981e6a9bff0f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.112954 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z57t9\" (UniqueName: \"kubernetes.io/projected/78bb670d-da93-47aa-af39-981e6a9bff0f-kube-api-access-z57t9\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.112962 4765 reconciler_common.go:293] 
"Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78bb670d-da93-47aa-af39-981e6a9bff0f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.461877 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.956626 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9902ccfe-ba6d-4b0d-a03e-a066c1da6379","Type":"ContainerStarted","Data":"b0b1e9c46dbce326adc2c337ebb27e396d35bb4e3c397ea6777139620cbf0a8b"} Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.995591 4765 generic.go:334] "Generic (PLEG): container finished" podID="0bd7ae01-c989-4e75-bc95-4c39a5fb8670" containerID="1947d7bb0442057606477c70c9ff3c80288f4e3c10938db0df6cbda3a5e6fe44" exitCode=0 Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.997888 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" event={"ID":"0bd7ae01-c989-4e75-bc95-4c39a5fb8670","Type":"ContainerDied","Data":"1947d7bb0442057606477c70c9ff3c80288f4e3c10938db0df6cbda3a5e6fe44"} Jan 21 13:24:00 crc kubenswrapper[4765]: I0121 13:24:00.997936 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" event={"ID":"0bd7ae01-c989-4e75-bc95-4c39a5fb8670","Type":"ContainerStarted","Data":"e295ad4b73e4ef9f3df6a779ec640aae31e7ab8144437228c60db847506fa294"} Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.002504 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bd6ec590-d60c-465a-ba30-838efade5720","Type":"ContainerStarted","Data":"a40fd45e3b0615222d776f54305893d772c1714474435e6b4c2f031137b2ef77"} Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.006512 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.133996 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.152599 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.204064 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:24:01 crc kubenswrapper[4765]: E0121 13:24:01.204583 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6322a13-8045-4ecb-bb13-6b249dbbc016" containerName="barbican-api-log" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.204601 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6322a13-8045-4ecb-bb13-6b249dbbc016" containerName="barbican-api-log" Jan 21 13:24:01 crc kubenswrapper[4765]: E0121 13:24:01.204614 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ec9aac-7fc7-4070-bfc1-a23f1a27060a" containerName="neutron-api" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.204622 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ec9aac-7fc7-4070-bfc1-a23f1a27060a" containerName="neutron-api" Jan 21 13:24:01 crc kubenswrapper[4765]: E0121 13:24:01.204653 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78bb670d-da93-47aa-af39-981e6a9bff0f" containerName="sg-core" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.204662 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="78bb670d-da93-47aa-af39-981e6a9bff0f" containerName="sg-core" Jan 21 13:24:01 crc kubenswrapper[4765]: E0121 13:24:01.204678 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6322a13-8045-4ecb-bb13-6b249dbbc016" containerName="barbican-api" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.204686 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6322a13-8045-4ecb-bb13-6b249dbbc016" containerName="barbican-api" Jan 21 13:24:01 crc kubenswrapper[4765]: E0121 13:24:01.204700 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ec9aac-7fc7-4070-bfc1-a23f1a27060a" containerName="neutron-httpd" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.204707 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ec9aac-7fc7-4070-bfc1-a23f1a27060a" containerName="neutron-httpd" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.204893 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6322a13-8045-4ecb-bb13-6b249dbbc016" containerName="barbican-api" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.204909 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="78bb670d-da93-47aa-af39-981e6a9bff0f" containerName="sg-core" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.204921 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ec9aac-7fc7-4070-bfc1-a23f1a27060a" containerName="neutron-api" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.204937 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6322a13-8045-4ecb-bb13-6b249dbbc016" containerName="barbican-api-log" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.204947 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ec9aac-7fc7-4070-bfc1-a23f1a27060a" containerName="neutron-httpd" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.209131 4765 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.223674 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.223972 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.256037 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.289459 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-config-data\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.289728 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.289861 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06aadd94-84cf-40f0-887f-24cadd8876d0-log-httpd\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.289972 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-scripts\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.290128 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff2f6\" (UniqueName: \"kubernetes.io/projected/06aadd94-84cf-40f0-887f-24cadd8876d0-kube-api-access-ff2f6\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.290260 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.290422 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06aadd94-84cf-40f0-887f-24cadd8876d0-run-httpd\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.392320 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-config-data\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 
13:24:01.392367 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.392393 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06aadd94-84cf-40f0-887f-24cadd8876d0-log-httpd\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.392423 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-scripts\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.392472 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff2f6\" (UniqueName: \"kubernetes.io/projected/06aadd94-84cf-40f0-887f-24cadd8876d0-kube-api-access-ff2f6\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.392503 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.392541 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06aadd94-84cf-40f0-887f-24cadd8876d0-run-httpd\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.396327 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.397478 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06aadd94-84cf-40f0-887f-24cadd8876d0-run-httpd\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.400503 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06aadd94-84cf-40f0-887f-24cadd8876d0-log-httpd\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.403798 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-scripts\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.404804 4765 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-config-data\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.411840 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff2f6\" (UniqueName: \"kubernetes.io/projected/06aadd94-84cf-40f0-887f-24cadd8876d0-kube-api-access-ff2f6\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.413461 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " pod="openstack/ceilometer-0" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.632773 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06ec9aac-7fc7-4070-bfc1-a23f1a27060a" path="/var/lib/kubelet/pods/06ec9aac-7fc7-4070-bfc1-a23f1a27060a/volumes" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.633819 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78bb670d-da93-47aa-af39-981e6a9bff0f" path="/var/lib/kubelet/pods/78bb670d-da93-47aa-af39-981e6a9bff0f/volumes" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.634418 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6322a13-8045-4ecb-bb13-6b249dbbc016" path="/var/lib/kubelet/pods/c6322a13-8045-4ecb-bb13-6b249dbbc016/volumes" Jan 21 13:24:01 crc kubenswrapper[4765]: I0121 13:24:01.640857 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:24:02 crc kubenswrapper[4765]: I0121 13:24:02.072814 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" event={"ID":"0bd7ae01-c989-4e75-bc95-4c39a5fb8670","Type":"ContainerStarted","Data":"08731cc17f14fe9a4ba2e3add17742e2a164db0b0de19888cd0bc7bc1a7e34c3"} Jan 21 13:24:02 crc kubenswrapper[4765]: I0121 13:24:02.074992 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:24:02 crc kubenswrapper[4765]: I0121 13:24:02.088799 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bd6ec590-d60c-465a-ba30-838efade5720","Type":"ContainerStarted","Data":"02055940f89da994d55069db2c26e1e48821d8c2fd67672dce2f2b3fc48b7bcd"} Jan 21 13:24:02 crc kubenswrapper[4765]: I0121 13:24:02.110201 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" podStartSLOduration=5.110176974 podStartE2EDuration="5.110176974s" podCreationTimestamp="2026-01-21 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:24:02.093388884 +0000 UTC m=+1303.111114706" watchObservedRunningTime="2026-01-21 13:24:02.110176974 +0000 UTC m=+1303.127902796" Jan 21 13:24:02 crc kubenswrapper[4765]: I0121 13:24:02.442189 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:24:02 crc kubenswrapper[4765]: W0121 13:24:02.461795 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06aadd94_84cf_40f0_887f_24cadd8876d0.slice/crio-94936eb38b430632dcd246291c9edc39c9bffb043a8c53a63a0f5c56a7aa684e WatchSource:0}: Error finding container 94936eb38b430632dcd246291c9edc39c9bffb043a8c53a63a0f5c56a7aa684e: Status 404 returned error can't find the container with id 94936eb38b430632dcd246291c9edc39c9bffb043a8c53a63a0f5c56a7aa684e Jan 21 13:24:03 crc kubenswrapper[4765]: I0121 13:24:03.114285 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="bd6ec590-d60c-465a-ba30-838efade5720" containerName="cinder-api-log" containerID="cri-o://02055940f89da994d55069db2c26e1e48821d8c2fd67672dce2f2b3fc48b7bcd" gracePeriod=30 Jan 21 13:24:03 crc kubenswrapper[4765]: I0121 13:24:03.114618 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bd6ec590-d60c-465a-ba30-838efade5720","Type":"ContainerStarted","Data":"a8470eb17910130a54308c88db1a6d1c20ef3e6415d23b768411903d20774896"} Jan 21 13:24:03 crc kubenswrapper[4765]: I0121 13:24:03.114663 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 13:24:03 crc kubenswrapper[4765]: I0121 13:24:03.115066 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="bd6ec590-d60c-465a-ba30-838efade5720" containerName="cinder-api" containerID="cri-o://a8470eb17910130a54308c88db1a6d1c20ef3e6415d23b768411903d20774896" gracePeriod=30 Jan 21 13:24:03 crc kubenswrapper[4765]: I0121 13:24:03.121121 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06aadd94-84cf-40f0-887f-24cadd8876d0","Type":"ContainerStarted","Data":"94936eb38b430632dcd246291c9edc39c9bffb043a8c53a63a0f5c56a7aa684e"} Jan 
21 13:24:03 crc kubenswrapper[4765]: I0121 13:24:03.155075 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9902ccfe-ba6d-4b0d-a03e-a066c1da6379","Type":"ContainerStarted","Data":"a434d50e63d966f15f3f9d12e1901082a58ba411e27069d64143fd32ed4f676d"} Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.064558 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.166970 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9902ccfe-ba6d-4b0d-a03e-a066c1da6379","Type":"ContainerStarted","Data":"e964cd9b7f60b62985a8a40b4f603c6c93489de047464c8bfbcf74ab375840d7"} Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.169840 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd6ec590-d60c-465a-ba30-838efade5720-logs\") pod \"bd6ec590-d60c-465a-ba30-838efade5720\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.169921 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd6ec590-d60c-465a-ba30-838efade5720-etc-machine-id\") pod \"bd6ec590-d60c-465a-ba30-838efade5720\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.170021 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-scripts\") pod \"bd6ec590-d60c-465a-ba30-838efade5720\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.170003 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd6ec590-d60c-465a-ba30-838efade5720-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bd6ec590-d60c-465a-ba30-838efade5720" (UID: "bd6ec590-d60c-465a-ba30-838efade5720"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.170052 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-config-data\") pod \"bd6ec590-d60c-465a-ba30-838efade5720\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.170107 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-combined-ca-bundle\") pod \"bd6ec590-d60c-465a-ba30-838efade5720\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.170135 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4frrl\" (UniqueName: \"kubernetes.io/projected/bd6ec590-d60c-465a-ba30-838efade5720-kube-api-access-4frrl\") pod \"bd6ec590-d60c-465a-ba30-838efade5720\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.170310 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd6ec590-d60c-465a-ba30-838efade5720-logs" (OuterVolumeSpecName: "logs") pod "bd6ec590-d60c-465a-ba30-838efade5720" (UID: "bd6ec590-d60c-465a-ba30-838efade5720"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.170626 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-config-data-custom\") pod \"bd6ec590-d60c-465a-ba30-838efade5720\" (UID: \"bd6ec590-d60c-465a-ba30-838efade5720\") " Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.170973 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd6ec590-d60c-465a-ba30-838efade5720-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.170989 4765 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd6ec590-d60c-465a-ba30-838efade5720-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.173299 4765 generic.go:334] "Generic (PLEG): container finished" podID="bd6ec590-d60c-465a-ba30-838efade5720" containerID="a8470eb17910130a54308c88db1a6d1c20ef3e6415d23b768411903d20774896" exitCode=0 Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.173320 4765 generic.go:334] "Generic (PLEG): container finished" podID="bd6ec590-d60c-465a-ba30-838efade5720" containerID="02055940f89da994d55069db2c26e1e48821d8c2fd67672dce2f2b3fc48b7bcd" exitCode=143 Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.173389 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.173614 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bd6ec590-d60c-465a-ba30-838efade5720","Type":"ContainerDied","Data":"a8470eb17910130a54308c88db1a6d1c20ef3e6415d23b768411903d20774896"} Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.173639 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bd6ec590-d60c-465a-ba30-838efade5720","Type":"ContainerDied","Data":"02055940f89da994d55069db2c26e1e48821d8c2fd67672dce2f2b3fc48b7bcd"} Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.173649 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bd6ec590-d60c-465a-ba30-838efade5720","Type":"ContainerDied","Data":"a40fd45e3b0615222d776f54305893d772c1714474435e6b4c2f031137b2ef77"} Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.173663 4765 scope.go:117] "RemoveContainer" containerID="a8470eb17910130a54308c88db1a6d1c20ef3e6415d23b768411903d20774896" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.179605 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-scripts" (OuterVolumeSpecName: "scripts") pod "bd6ec590-d60c-465a-ba30-838efade5720" (UID: "bd6ec590-d60c-465a-ba30-838efade5720"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.185158 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd6ec590-d60c-465a-ba30-838efade5720-kube-api-access-4frrl" (OuterVolumeSpecName: "kube-api-access-4frrl") pod "bd6ec590-d60c-465a-ba30-838efade5720" (UID: "bd6ec590-d60c-465a-ba30-838efade5720"). InnerVolumeSpecName "kube-api-access-4frrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.185228 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06aadd94-84cf-40f0-887f-24cadd8876d0","Type":"ContainerStarted","Data":"c319a7e069c7ff590c53b78a4a3b85dba69e27fc7d429354222090f637cef19a"} Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.185255 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06aadd94-84cf-40f0-887f-24cadd8876d0","Type":"ContainerStarted","Data":"c08ee816e2d56a9e73dee70f0f0dd364a6e9cac0f18755929529e40e60842b6e"} Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.185484 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bd6ec590-d60c-465a-ba30-838efade5720" (UID: "bd6ec590-d60c-465a-ba30-838efade5720"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.200827 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.071898985 podStartE2EDuration="8.200803262s" podCreationTimestamp="2026-01-21 13:23:56 +0000 UTC" firstStartedPulling="2026-01-21 13:24:00.057512684 +0000 UTC m=+1301.075238506" lastFinishedPulling="2026-01-21 13:24:01.186416961 +0000 UTC m=+1302.204142783" observedRunningTime="2026-01-21 13:24:04.191592794 +0000 UTC m=+1305.209318616" watchObservedRunningTime="2026-01-21 13:24:04.200803262 +0000 UTC m=+1305.218529084" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.223440 4765 scope.go:117] "RemoveContainer" containerID="02055940f89da994d55069db2c26e1e48821d8c2fd67672dce2f2b3fc48b7bcd" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.223608 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd6ec590-d60c-465a-ba30-838efade5720" (UID: "bd6ec590-d60c-465a-ba30-838efade5720"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.265719 4765 scope.go:117] "RemoveContainer" containerID="a8470eb17910130a54308c88db1a6d1c20ef3e6415d23b768411903d20774896" Jan 21 13:24:04 crc kubenswrapper[4765]: E0121 13:24:04.266405 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8470eb17910130a54308c88db1a6d1c20ef3e6415d23b768411903d20774896\": container with ID starting with a8470eb17910130a54308c88db1a6d1c20ef3e6415d23b768411903d20774896 not found: ID does not exist" containerID="a8470eb17910130a54308c88db1a6d1c20ef3e6415d23b768411903d20774896" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.266445 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8470eb17910130a54308c88db1a6d1c20ef3e6415d23b768411903d20774896"} err="failed to get container status \"a8470eb17910130a54308c88db1a6d1c20ef3e6415d23b768411903d20774896\": rpc error: code = NotFound desc = could not find container \"a8470eb17910130a54308c88db1a6d1c20ef3e6415d23b768411903d20774896\": container with ID starting with a8470eb17910130a54308c88db1a6d1c20ef3e6415d23b768411903d20774896 not found: ID does not exist" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.266471 4765 scope.go:117] "RemoveContainer" containerID="02055940f89da994d55069db2c26e1e48821d8c2fd67672dce2f2b3fc48b7bcd" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.267086 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-config-data" (OuterVolumeSpecName: "config-data") pod "bd6ec590-d60c-465a-ba30-838efade5720" (UID: "bd6ec590-d60c-465a-ba30-838efade5720"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:04 crc kubenswrapper[4765]: E0121 13:24:04.268119 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02055940f89da994d55069db2c26e1e48821d8c2fd67672dce2f2b3fc48b7bcd\": container with ID starting with 02055940f89da994d55069db2c26e1e48821d8c2fd67672dce2f2b3fc48b7bcd not found: ID does not exist" containerID="02055940f89da994d55069db2c26e1e48821d8c2fd67672dce2f2b3fc48b7bcd" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.268144 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02055940f89da994d55069db2c26e1e48821d8c2fd67672dce2f2b3fc48b7bcd"} err="failed to get container status \"02055940f89da994d55069db2c26e1e48821d8c2fd67672dce2f2b3fc48b7bcd\": rpc error: code = NotFound desc = could not find container \"02055940f89da994d55069db2c26e1e48821d8c2fd67672dce2f2b3fc48b7bcd\": container with ID starting with 02055940f89da994d55069db2c26e1e48821d8c2fd67672dce2f2b3fc48b7bcd not found: ID does not exist" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.268163 4765 scope.go:117] "RemoveContainer" containerID="a8470eb17910130a54308c88db1a6d1c20ef3e6415d23b768411903d20774896" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.274709 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8470eb17910130a54308c88db1a6d1c20ef3e6415d23b768411903d20774896"} err="failed to get container status \"a8470eb17910130a54308c88db1a6d1c20ef3e6415d23b768411903d20774896\": rpc error: code = NotFound desc = could not find container \"a8470eb17910130a54308c88db1a6d1c20ef3e6415d23b768411903d20774896\": container with ID starting with a8470eb17910130a54308c88db1a6d1c20ef3e6415d23b768411903d20774896 not found: ID does not exist" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.274754 4765 scope.go:117] "RemoveContainer" containerID="02055940f89da994d55069db2c26e1e48821d8c2fd67672dce2f2b3fc48b7bcd" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.275023 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02055940f89da994d55069db2c26e1e48821d8c2fd67672dce2f2b3fc48b7bcd"} err="failed to get container status \"02055940f89da994d55069db2c26e1e48821d8c2fd67672dce2f2b3fc48b7bcd\": rpc error: code = NotFound desc = could not find container \"02055940f89da994d55069db2c26e1e48821d8c2fd67672dce2f2b3fc48b7bcd\": container with ID starting with 02055940f89da994d55069db2c26e1e48821d8c2fd67672dce2f2b3fc48b7bcd not found: ID does not exist" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.277051 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.277083 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.277095 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.277108 4765 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-4frrl\" (UniqueName: \"kubernetes.io/projected/bd6ec590-d60c-465a-ba30-838efade5720-kube-api-access-4frrl\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.277122 4765 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd6ec590-d60c-465a-ba30-838efade5720-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.514313 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.523338 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.573146 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 13:24:04 crc kubenswrapper[4765]: E0121 13:24:04.573870 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd6ec590-d60c-465a-ba30-838efade5720" containerName="cinder-api-log" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.573887 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd6ec590-d60c-465a-ba30-838efade5720" containerName="cinder-api-log" Jan 21 13:24:04 crc kubenswrapper[4765]: E0121 13:24:04.573901 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd6ec590-d60c-465a-ba30-838efade5720" containerName="cinder-api" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.573907 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd6ec590-d60c-465a-ba30-838efade5720" containerName="cinder-api" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.574084 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd6ec590-d60c-465a-ba30-838efade5720" containerName="cinder-api-log" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.574100 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd6ec590-d60c-465a-ba30-838efade5720" containerName="cinder-api" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.575091 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.581507 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.581713 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.582390 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.632266 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.689195 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-config-data\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.689277 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.689318 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-config-data-custom\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.689648 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.689715 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-logs\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.689833 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-scripts\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.689900 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc2ff\" (UniqueName: \"kubernetes.io/projected/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-kube-api-access-nc2ff\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.690058 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.690145 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.793144 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.793232 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.793257 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-config-data\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.793320 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.793355 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-config-data-custom\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.793399 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.793440 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-logs\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.793529 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-scripts\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.793566 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc2ff\" 
(UniqueName: \"kubernetes.io/projected/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-kube-api-access-nc2ff\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.793890 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.794116 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-logs\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.799398 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.802939 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-config-data\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.804675 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-config-data-custom\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.805022 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.805448 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.806969 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-scripts\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:04 crc kubenswrapper[4765]: I0121 13:24:04.828534 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc2ff\" (UniqueName: \"kubernetes.io/projected/ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264-kube-api-access-nc2ff\") pod \"cinder-api-0\" (UID: \"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264\") " pod="openstack/cinder-api-0" Jan 21 13:24:05 crc kubenswrapper[4765]: I0121 13:24:05.023558 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 13:24:05 crc kubenswrapper[4765]: I0121 13:24:05.207849 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06aadd94-84cf-40f0-887f-24cadd8876d0","Type":"ContainerStarted","Data":"a1d8dd1c4966a38df6f9bf6da440ea459aaa75a29b38dd97b8dc225e649c5073"} Jan 21 13:24:05 crc kubenswrapper[4765]: I0121 13:24:05.634009 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd6ec590-d60c-465a-ba30-838efade5720" path="/var/lib/kubelet/pods/bd6ec590-d60c-465a-ba30-838efade5720/volumes" Jan 21 13:24:05 crc kubenswrapper[4765]: I0121 13:24:05.634977 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 13:24:06 crc kubenswrapper[4765]: I0121 13:24:06.227118 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264","Type":"ContainerStarted","Data":"2eaaeb3f7a474dd2a6eeca0e9b3e54769c3982e430e1f3cacff47fa38cf3935d"} Jan 21 13:24:06 crc kubenswrapper[4765]: I0121 13:24:06.814860 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:24:06 crc kubenswrapper[4765]: I0121 13:24:06.825271 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-86cbcc788d-b897j" Jan 21 13:24:07 crc kubenswrapper[4765]: I0121 13:24:07.201663 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 13:24:07 crc kubenswrapper[4765]: I0121 13:24:07.203910 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="9902ccfe-ba6d-4b0d-a03e-a066c1da6379" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.166:8080/\": dial tcp 10.217.0.166:8080: connect: connection refused" Jan 21 13:24:07 crc kubenswrapper[4765]: I0121 13:24:07.280269 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264","Type":"ContainerStarted","Data":"f1d693e73703a37cdeffe829de2210ab725083c0d5fe3802f5ed5d4cee24d506"} Jan 21 13:24:07 crc kubenswrapper[4765]: I0121 13:24:07.565392 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:24:07 crc kubenswrapper[4765]: I0121 13:24:07.644592 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-gcftg"] Jan 21 13:24:07 crc kubenswrapper[4765]: I0121 13:24:07.644830 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" podUID="75275df0-97ad-49b4-ac22-558bb6b29857" containerName="dnsmasq-dns" containerID="cri-o://f824fc8b37baf987c95380beb2b679b54e84b2154bb3b3a6bd1202d3b35635f6" gracePeriod=10 Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.294075 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06aadd94-84cf-40f0-887f-24cadd8876d0","Type":"ContainerStarted","Data":"421ba147a644aefbaddaf933ee3e555acaa53dca4b782df36d1e699b8d6b0839"} Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.294530 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.296811 4765 generic.go:334] "Generic (PLEG): container finished" 
podID="75275df0-97ad-49b4-ac22-558bb6b29857" containerID="f824fc8b37baf987c95380beb2b679b54e84b2154bb3b3a6bd1202d3b35635f6" exitCode=0 Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.296897 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" event={"ID":"75275df0-97ad-49b4-ac22-558bb6b29857","Type":"ContainerDied","Data":"f824fc8b37baf987c95380beb2b679b54e84b2154bb3b3a6bd1202d3b35635f6"} Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.296947 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" event={"ID":"75275df0-97ad-49b4-ac22-558bb6b29857","Type":"ContainerDied","Data":"a95653bc49d55bcd750e5c8d3150e27665c9822c1e52783ec972c3e2c0ecdfa0"} Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.296960 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a95653bc49d55bcd750e5c8d3150e27665c9822c1e52783ec972c3e2c0ecdfa0" Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.300400 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264","Type":"ContainerStarted","Data":"59e59d421fd063b0621b160578f09c01ceb06f20c5124d69a4412afc34b491f0"} Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.300967 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.346127 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.349405877 podStartE2EDuration="7.346106499s" podCreationTimestamp="2026-01-21 13:24:01 +0000 UTC" firstStartedPulling="2026-01-21 13:24:02.472279296 +0000 UTC m=+1303.490005118" lastFinishedPulling="2026-01-21 13:24:06.468979918 +0000 UTC m=+1307.486705740" observedRunningTime="2026-01-21 13:24:08.335958953 +0000 UTC m=+1309.353684775" watchObservedRunningTime="2026-01-21 13:24:08.346106499 +0000 UTC m=+1309.363832331" Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.383003 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.389599 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.389580237 podStartE2EDuration="4.389580237s" podCreationTimestamp="2026-01-21 13:24:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:24:08.370734457 +0000 UTC m=+1309.388460289" watchObservedRunningTime="2026-01-21 13:24:08.389580237 +0000 UTC m=+1309.407306059" Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.527458 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-ovsdbserver-sb\") pod \"75275df0-97ad-49b4-ac22-558bb6b29857\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.527526 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-config\") pod \"75275df0-97ad-49b4-ac22-558bb6b29857\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.527646 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g5c4\" (UniqueName: \"kubernetes.io/projected/75275df0-97ad-49b4-ac22-558bb6b29857-kube-api-access-7g5c4\") pod \"75275df0-97ad-49b4-ac22-558bb6b29857\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.527714 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-dns-svc\") pod \"75275df0-97ad-49b4-ac22-558bb6b29857\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.527834 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-ovsdbserver-nb\") pod \"75275df0-97ad-49b4-ac22-558bb6b29857\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.527963 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-dns-swift-storage-0\") pod \"75275df0-97ad-49b4-ac22-558bb6b29857\" (UID: \"75275df0-97ad-49b4-ac22-558bb6b29857\") " Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.562546 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75275df0-97ad-49b4-ac22-558bb6b29857-kube-api-access-7g5c4" (OuterVolumeSpecName: "kube-api-access-7g5c4") pod "75275df0-97ad-49b4-ac22-558bb6b29857" (UID: "75275df0-97ad-49b4-ac22-558bb6b29857"). InnerVolumeSpecName "kube-api-access-7g5c4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.630591 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g5c4\" (UniqueName: \"kubernetes.io/projected/75275df0-97ad-49b4-ac22-558bb6b29857-kube-api-access-7g5c4\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.631241 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "75275df0-97ad-49b4-ac22-558bb6b29857" (UID: "75275df0-97ad-49b4-ac22-558bb6b29857"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.642434 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "75275df0-97ad-49b4-ac22-558bb6b29857" (UID: "75275df0-97ad-49b4-ac22-558bb6b29857"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.650301 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75275df0-97ad-49b4-ac22-558bb6b29857" (UID: "75275df0-97ad-49b4-ac22-558bb6b29857"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.662820 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-config" (OuterVolumeSpecName: "config") pod "75275df0-97ad-49b4-ac22-558bb6b29857" (UID: "75275df0-97ad-49b4-ac22-558bb6b29857"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.667520 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "75275df0-97ad-49b4-ac22-558bb6b29857" (UID: "75275df0-97ad-49b4-ac22-558bb6b29857"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.733274 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.733333 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.733347 4765 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.733359 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:08 crc kubenswrapper[4765]: I0121 13:24:08.733372 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75275df0-97ad-49b4-ac22-558bb6b29857-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:09 crc kubenswrapper[4765]: I0121 13:24:09.167835 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7c5d9867cf-9ffzm" Jan 21 13:24:09 crc kubenswrapper[4765]: I0121 13:24:09.309564 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-gcftg" Jan 21 13:24:09 crc kubenswrapper[4765]: I0121 13:24:09.347472 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-gcftg"] Jan 21 13:24:09 crc kubenswrapper[4765]: I0121 13:24:09.356596 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-gcftg"] Jan 21 13:24:09 crc kubenswrapper[4765]: I0121 13:24:09.624957 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75275df0-97ad-49b4-ac22-558bb6b29857" path="/var/lib/kubelet/pods/75275df0-97ad-49b4-ac22-558bb6b29857/volumes" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.506957 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 13:24:11 crc kubenswrapper[4765]: E0121 13:24:11.507648 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75275df0-97ad-49b4-ac22-558bb6b29857" containerName="init" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.507661 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="75275df0-97ad-49b4-ac22-558bb6b29857" containerName="init" Jan 21 13:24:11 crc kubenswrapper[4765]: E0121 13:24:11.507675 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75275df0-97ad-49b4-ac22-558bb6b29857" containerName="dnsmasq-dns" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.507681 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="75275df0-97ad-49b4-ac22-558bb6b29857" containerName="dnsmasq-dns" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.507878 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="75275df0-97ad-49b4-ac22-558bb6b29857" containerName="dnsmasq-dns" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.508728 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.511608 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-4l56h" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.511613 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.511951 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.529913 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.606300 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/344fdbd2-c402-42e4-83d5-7e0bb3b978f6-openstack-config-secret\") pod \"openstackclient\" (UID: \"344fdbd2-c402-42e4-83d5-7e0bb3b978f6\") " pod="openstack/openstackclient" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.606389 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/344fdbd2-c402-42e4-83d5-7e0bb3b978f6-openstack-config\") pod \"openstackclient\" (UID: \"344fdbd2-c402-42e4-83d5-7e0bb3b978f6\") " pod="openstack/openstackclient" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.606719 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thbl4\" (UniqueName: \"kubernetes.io/projected/344fdbd2-c402-42e4-83d5-7e0bb3b978f6-kube-api-access-thbl4\") pod \"openstackclient\" (UID: \"344fdbd2-c402-42e4-83d5-7e0bb3b978f6\") " pod="openstack/openstackclient" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.606785 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/344fdbd2-c402-42e4-83d5-7e0bb3b978f6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"344fdbd2-c402-42e4-83d5-7e0bb3b978f6\") " pod="openstack/openstackclient" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.709068 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thbl4\" (UniqueName: \"kubernetes.io/projected/344fdbd2-c402-42e4-83d5-7e0bb3b978f6-kube-api-access-thbl4\") pod \"openstackclient\" (UID: \"344fdbd2-c402-42e4-83d5-7e0bb3b978f6\") " pod="openstack/openstackclient" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.709124 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/344fdbd2-c402-42e4-83d5-7e0bb3b978f6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"344fdbd2-c402-42e4-83d5-7e0bb3b978f6\") " pod="openstack/openstackclient" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.709243 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/344fdbd2-c402-42e4-83d5-7e0bb3b978f6-openstack-config-secret\") pod \"openstackclient\" (UID: \"344fdbd2-c402-42e4-83d5-7e0bb3b978f6\") " pod="openstack/openstackclient" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.709303 4765 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/344fdbd2-c402-42e4-83d5-7e0bb3b978f6-openstack-config\") pod \"openstackclient\" (UID: \"344fdbd2-c402-42e4-83d5-7e0bb3b978f6\") " pod="openstack/openstackclient" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.710197 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/344fdbd2-c402-42e4-83d5-7e0bb3b978f6-openstack-config\") pod \"openstackclient\" (UID: \"344fdbd2-c402-42e4-83d5-7e0bb3b978f6\") " pod="openstack/openstackclient" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.715955 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/344fdbd2-c402-42e4-83d5-7e0bb3b978f6-openstack-config-secret\") pod \"openstackclient\" (UID: \"344fdbd2-c402-42e4-83d5-7e0bb3b978f6\") " pod="openstack/openstackclient" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.720741 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/344fdbd2-c402-42e4-83d5-7e0bb3b978f6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"344fdbd2-c402-42e4-83d5-7e0bb3b978f6\") " pod="openstack/openstackclient" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.734697 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thbl4\" (UniqueName: \"kubernetes.io/projected/344fdbd2-c402-42e4-83d5-7e0bb3b978f6-kube-api-access-thbl4\") pod \"openstackclient\" (UID: \"344fdbd2-c402-42e4-83d5-7e0bb3b978f6\") " pod="openstack/openstackclient" Jan 21 13:24:11 crc kubenswrapper[4765]: I0121 13:24:11.845036 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 13:24:12 crc kubenswrapper[4765]: I0121 13:24:12.411328 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 13:24:12 crc kubenswrapper[4765]: I0121 13:24:12.541079 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 13:24:12 crc kubenswrapper[4765]: I0121 13:24:12.615386 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 13:24:13 crc kubenswrapper[4765]: I0121 13:24:13.352136 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9902ccfe-ba6d-4b0d-a03e-a066c1da6379" containerName="cinder-scheduler" containerID="cri-o://a434d50e63d966f15f3f9d12e1901082a58ba411e27069d64143fd32ed4f676d" gracePeriod=30 Jan 21 13:24:13 crc kubenswrapper[4765]: I0121 13:24:13.352582 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"344fdbd2-c402-42e4-83d5-7e0bb3b978f6","Type":"ContainerStarted","Data":"501005dcef2cb72e30ed90eae7bfb56c89c0135bbc88a6a378c4e279112344ec"} Jan 21 13:24:13 crc kubenswrapper[4765]: I0121 13:24:13.353024 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9902ccfe-ba6d-4b0d-a03e-a066c1da6379" containerName="probe" containerID="cri-o://e964cd9b7f60b62985a8a40b4f603c6c93489de047464c8bfbcf74ab375840d7" gracePeriod=30 Jan 21 13:24:14 crc kubenswrapper[4765]: I0121 13:24:14.371278 4765 generic.go:334] "Generic (PLEG): container finished" podID="9902ccfe-ba6d-4b0d-a03e-a066c1da6379" containerID="e964cd9b7f60b62985a8a40b4f603c6c93489de047464c8bfbcf74ab375840d7" exitCode=0 Jan 21 13:24:14 crc kubenswrapper[4765]: I0121 13:24:14.371352 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9902ccfe-ba6d-4b0d-a03e-a066c1da6379","Type":"ContainerDied","Data":"e964cd9b7f60b62985a8a40b4f603c6c93489de047464c8bfbcf74ab375840d7"} Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.374835 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6f7f76cb7-rnmdt" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.491862 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrj4c\" (UniqueName: \"kubernetes.io/projected/be7431e9-c408-49e2-80b8-4d13da26f0ee-kube-api-access-mrj4c\") pod \"be7431e9-c408-49e2-80b8-4d13da26f0ee\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.492047 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be7431e9-c408-49e2-80b8-4d13da26f0ee-config-data\") pod \"be7431e9-c408-49e2-80b8-4d13da26f0ee\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.492183 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be7431e9-c408-49e2-80b8-4d13da26f0ee-logs\") pod \"be7431e9-c408-49e2-80b8-4d13da26f0ee\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.492375 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be7431e9-c408-49e2-80b8-4d13da26f0ee-combined-ca-bundle\") pod \"be7431e9-c408-49e2-80b8-4d13da26f0ee\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.492631 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be7431e9-c408-49e2-80b8-4d13da26f0ee-config-data-custom\") pod \"be7431e9-c408-49e2-80b8-4d13da26f0ee\" (UID: \"be7431e9-c408-49e2-80b8-4d13da26f0ee\") " Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.506556 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be7431e9-c408-49e2-80b8-4d13da26f0ee-logs" (OuterVolumeSpecName: "logs") pod "be7431e9-c408-49e2-80b8-4d13da26f0ee" (UID: "be7431e9-c408-49e2-80b8-4d13da26f0ee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.604860 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be7431e9-c408-49e2-80b8-4d13da26f0ee-kube-api-access-mrj4c" (OuterVolumeSpecName: "kube-api-access-mrj4c") pod "be7431e9-c408-49e2-80b8-4d13da26f0ee" (UID: "be7431e9-c408-49e2-80b8-4d13da26f0ee"). InnerVolumeSpecName "kube-api-access-mrj4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.606996 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be7431e9-c408-49e2-80b8-4d13da26f0ee-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "be7431e9-c408-49e2-80b8-4d13da26f0ee" (UID: "be7431e9-c408-49e2-80b8-4d13da26f0ee"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.669727 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be7431e9-c408-49e2-80b8-4d13da26f0ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be7431e9-c408-49e2-80b8-4d13da26f0ee" (UID: "be7431e9-c408-49e2-80b8-4d13da26f0ee"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.719926 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be7431e9-c408-49e2-80b8-4d13da26f0ee-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.726730 4765 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be7431e9-c408-49e2-80b8-4d13da26f0ee-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.727295 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrj4c\" (UniqueName: \"kubernetes.io/projected/be7431e9-c408-49e2-80b8-4d13da26f0ee-kube-api-access-mrj4c\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.749364 4765 generic.go:334] "Generic (PLEG): container finished" podID="099ff49f-9143-4fa1-9844-cb66dc028aca" containerID="e77823b748ab01858ea881864999dee9c96060c6f517ef60c0f718e508b9a594" exitCode=137 Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.752780 4765 generic.go:334] "Generic (PLEG): container finished" podID="be7431e9-c408-49e2-80b8-4d13da26f0ee" containerID="c8334accf94bed406501ddfac62757bc2cb5cd307c267a320c0a1376f8cf1c9e" exitCode=137 Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.752876 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6f7f76cb7-rnmdt" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.762337 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" event={"ID":"099ff49f-9143-4fa1-9844-cb66dc028aca","Type":"ContainerDied","Data":"e77823b748ab01858ea881864999dee9c96060c6f517ef60c0f718e508b9a594"} Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.762382 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f7f76cb7-rnmdt" event={"ID":"be7431e9-c408-49e2-80b8-4d13da26f0ee","Type":"ContainerDied","Data":"c8334accf94bed406501ddfac62757bc2cb5cd307c267a320c0a1376f8cf1c9e"} Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.762407 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-c67b7f46c-vdfh2"] Jan 21 13:24:17 crc kubenswrapper[4765]: E0121 13:24:17.763098 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be7431e9-c408-49e2-80b8-4d13da26f0ee" containerName="barbican-worker" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.763116 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="be7431e9-c408-49e2-80b8-4d13da26f0ee" containerName="barbican-worker" Jan 21 13:24:17 crc kubenswrapper[4765]: E0121 13:24:17.763152 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be7431e9-c408-49e2-80b8-4d13da26f0ee" containerName="barbican-worker-log" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.763161 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="be7431e9-c408-49e2-80b8-4d13da26f0ee" containerName="barbican-worker-log" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.763601 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="be7431e9-c408-49e2-80b8-4d13da26f0ee" containerName="barbican-worker" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.763633 4765 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="be7431e9-c408-49e2-80b8-4d13da26f0ee" containerName="barbican-worker-log" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.765504 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-c67b7f46c-vdfh2"] Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.765523 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6f7f76cb7-rnmdt" event={"ID":"be7431e9-c408-49e2-80b8-4d13da26f0ee","Type":"ContainerDied","Data":"acda5a870f88926f36af29f82545df661ff4c938f6ea5dc80fcd1374f17b6593"} Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.765600 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.766056 4765 scope.go:117] "RemoveContainer" containerID="c8334accf94bed406501ddfac62757bc2cb5cd307c267a320c0a1376f8cf1c9e" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.770618 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.770814 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.771196 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.811148 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be7431e9-c408-49e2-80b8-4d13da26f0ee-config-data" (OuterVolumeSpecName: "config-data") pod "be7431e9-c408-49e2-80b8-4d13da26f0ee" (UID: "be7431e9-c408-49e2-80b8-4d13da26f0ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.834995 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be7431e9-c408-49e2-80b8-4d13da26f0ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.835270 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be7431e9-c408-49e2-80b8-4d13da26f0ee-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.885723 4765 scope.go:117] "RemoveContainer" containerID="f6f24da14aa07819931402b685115b0ac9c3fbbf1ea8954ee43b5d6dda5db9f2" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.890416 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.939357 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-run-httpd\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.940604 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-log-httpd\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.940726 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kcpf\" (UniqueName: \"kubernetes.io/projected/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-kube-api-access-6kcpf\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.940900 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-public-tls-certs\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.940980 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-etc-swift\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.941006 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-combined-ca-bundle\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.941043 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-config-data\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.941186 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-internal-tls-certs\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.962749 4765 scope.go:117] "RemoveContainer" containerID="c8334accf94bed406501ddfac62757bc2cb5cd307c267a320c0a1376f8cf1c9e" Jan 21 13:24:17 crc kubenswrapper[4765]: E0121 
13:24:17.965614 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8334accf94bed406501ddfac62757bc2cb5cd307c267a320c0a1376f8cf1c9e\": container with ID starting with c8334accf94bed406501ddfac62757bc2cb5cd307c267a320c0a1376f8cf1c9e not found: ID does not exist" containerID="c8334accf94bed406501ddfac62757bc2cb5cd307c267a320c0a1376f8cf1c9e" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.965885 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8334accf94bed406501ddfac62757bc2cb5cd307c267a320c0a1376f8cf1c9e"} err="failed to get container status \"c8334accf94bed406501ddfac62757bc2cb5cd307c267a320c0a1376f8cf1c9e\": rpc error: code = NotFound desc = could not find container \"c8334accf94bed406501ddfac62757bc2cb5cd307c267a320c0a1376f8cf1c9e\": container with ID starting with c8334accf94bed406501ddfac62757bc2cb5cd307c267a320c0a1376f8cf1c9e not found: ID does not exist" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.966069 4765 scope.go:117] "RemoveContainer" containerID="f6f24da14aa07819931402b685115b0ac9c3fbbf1ea8954ee43b5d6dda5db9f2" Jan 21 13:24:17 crc kubenswrapper[4765]: E0121 13:24:17.970588 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6f24da14aa07819931402b685115b0ac9c3fbbf1ea8954ee43b5d6dda5db9f2\": container with ID starting with f6f24da14aa07819931402b685115b0ac9c3fbbf1ea8954ee43b5d6dda5db9f2 not found: ID does not exist" containerID="f6f24da14aa07819931402b685115b0ac9c3fbbf1ea8954ee43b5d6dda5db9f2" Jan 21 13:24:17 crc kubenswrapper[4765]: I0121 13:24:17.970657 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6f24da14aa07819931402b685115b0ac9c3fbbf1ea8954ee43b5d6dda5db9f2"} err="failed to get container status \"f6f24da14aa07819931402b685115b0ac9c3fbbf1ea8954ee43b5d6dda5db9f2\": rpc error: code = NotFound desc = could not find container \"f6f24da14aa07819931402b685115b0ac9c3fbbf1ea8954ee43b5d6dda5db9f2\": container with ID starting with f6f24da14aa07819931402b685115b0ac9c3fbbf1ea8954ee43b5d6dda5db9f2 not found: ID does not exist" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.042619 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/099ff49f-9143-4fa1-9844-cb66dc028aca-config-data-custom\") pod \"099ff49f-9143-4fa1-9844-cb66dc028aca\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.042795 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmqfm\" (UniqueName: \"kubernetes.io/projected/099ff49f-9143-4fa1-9844-cb66dc028aca-kube-api-access-xmqfm\") pod \"099ff49f-9143-4fa1-9844-cb66dc028aca\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.042933 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/099ff49f-9143-4fa1-9844-cb66dc028aca-combined-ca-bundle\") pod \"099ff49f-9143-4fa1-9844-cb66dc028aca\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.043404 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/099ff49f-9143-4fa1-9844-cb66dc028aca-config-data\") pod \"099ff49f-9143-4fa1-9844-cb66dc028aca\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.043471 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/099ff49f-9143-4fa1-9844-cb66dc028aca-logs\") pod \"099ff49f-9143-4fa1-9844-cb66dc028aca\" (UID: \"099ff49f-9143-4fa1-9844-cb66dc028aca\") " Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.043766 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-internal-tls-certs\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.043803 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-run-httpd\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.043848 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-log-httpd\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.043888 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kcpf\" (UniqueName: \"kubernetes.io/projected/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-kube-api-access-6kcpf\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.043934 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-public-tls-certs\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.043975 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-etc-swift\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.043997 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-combined-ca-bundle\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.044022 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-config-data\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " 
pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.045340 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-log-httpd\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.046076 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-run-httpd\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.046634 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/099ff49f-9143-4fa1-9844-cb66dc028aca-logs" (OuterVolumeSpecName: "logs") pod "099ff49f-9143-4fa1-9844-cb66dc028aca" (UID: "099ff49f-9143-4fa1-9844-cb66dc028aca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.050128 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/099ff49f-9143-4fa1-9844-cb66dc028aca-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "099ff49f-9143-4fa1-9844-cb66dc028aca" (UID: "099ff49f-9143-4fa1-9844-cb66dc028aca"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.056651 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-internal-tls-certs\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.059452 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/099ff49f-9143-4fa1-9844-cb66dc028aca-kube-api-access-xmqfm" (OuterVolumeSpecName: "kube-api-access-xmqfm") pod "099ff49f-9143-4fa1-9844-cb66dc028aca" (UID: "099ff49f-9143-4fa1-9844-cb66dc028aca"). InnerVolumeSpecName "kube-api-access-xmqfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.062432 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-etc-swift\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.069035 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-config-data\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.077590 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-combined-ca-bundle\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.084180 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kcpf\" (UniqueName: \"kubernetes.io/projected/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-kube-api-access-6kcpf\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.084615 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcc230e6-cf6d-4fc2-bea2-9ba2b028716b-public-tls-certs\") pod \"swift-proxy-c67b7f46c-vdfh2\" (UID: \"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b\") " pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.115564 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6f7f76cb7-rnmdt"] Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.125974 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-6f7f76cb7-rnmdt"] Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.130414 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/099ff49f-9143-4fa1-9844-cb66dc028aca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "099ff49f-9143-4fa1-9844-cb66dc028aca" (UID: "099ff49f-9143-4fa1-9844-cb66dc028aca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.145503 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/099ff49f-9143-4fa1-9844-cb66dc028aca-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.145532 4765 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/099ff49f-9143-4fa1-9844-cb66dc028aca-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.145545 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmqfm\" (UniqueName: \"kubernetes.io/projected/099ff49f-9143-4fa1-9844-cb66dc028aca-kube-api-access-xmqfm\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.145555 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/099ff49f-9143-4fa1-9844-cb66dc028aca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.152396 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/099ff49f-9143-4fa1-9844-cb66dc028aca-config-data" (OuterVolumeSpecName: "config-data") pod "099ff49f-9143-4fa1-9844-cb66dc028aca" (UID: "099ff49f-9143-4fa1-9844-cb66dc028aca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.179674 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.247625 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/099ff49f-9143-4fa1-9844-cb66dc028aca-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.815166 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" event={"ID":"099ff49f-9143-4fa1-9844-cb66dc028aca","Type":"ContainerDied","Data":"11bae7532db028342299c3a755cb94ec9bc9251c4f316e6bfdb0cd899d30d8ba"} Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.815725 4765 scope.go:117] "RemoveContainer" containerID="e77823b748ab01858ea881864999dee9c96060c6f517ef60c0f718e508b9a594" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.815833 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5dd9885f5b-xm6hz" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.843800 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-c67b7f46c-vdfh2"] Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.855018 4765 scope.go:117] "RemoveContainer" containerID="1fdf21e941ff05ff463d85795b7227c43d6220b563240ffe6c900ab82f07b728" Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.855594 4765 generic.go:334] "Generic (PLEG): container finished" podID="9902ccfe-ba6d-4b0d-a03e-a066c1da6379" containerID="a434d50e63d966f15f3f9d12e1901082a58ba411e27069d64143fd32ed4f676d" exitCode=0 Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.855764 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9902ccfe-ba6d-4b0d-a03e-a066c1da6379","Type":"ContainerDied","Data":"a434d50e63d966f15f3f9d12e1901082a58ba411e27069d64143fd32ed4f676d"} Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.875545 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5dd9885f5b-xm6hz"] Jan 21 13:24:18 crc kubenswrapper[4765]: I0121 13:24:18.901335 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-5dd9885f5b-xm6hz"] Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.061080 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.168459 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-etc-machine-id\") pod \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.168544 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqzh7\" (UniqueName: \"kubernetes.io/projected/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-kube-api-access-gqzh7\") pod \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.168587 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-combined-ca-bundle\") pod \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.168717 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-scripts\") pod \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.168819 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-config-data\") pod \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.168846 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-config-data-custom\") pod 
\"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\" (UID: \"9902ccfe-ba6d-4b0d-a03e-a066c1da6379\") " Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.170225 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9902ccfe-ba6d-4b0d-a03e-a066c1da6379" (UID: "9902ccfe-ba6d-4b0d-a03e-a066c1da6379"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.176134 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-scripts" (OuterVolumeSpecName: "scripts") pod "9902ccfe-ba6d-4b0d-a03e-a066c1da6379" (UID: "9902ccfe-ba6d-4b0d-a03e-a066c1da6379"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.176343 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9902ccfe-ba6d-4b0d-a03e-a066c1da6379" (UID: "9902ccfe-ba6d-4b0d-a03e-a066c1da6379"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.176530 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-kube-api-access-gqzh7" (OuterVolumeSpecName: "kube-api-access-gqzh7") pod "9902ccfe-ba6d-4b0d-a03e-a066c1da6379" (UID: "9902ccfe-ba6d-4b0d-a03e-a066c1da6379"). InnerVolumeSpecName "kube-api-access-gqzh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.261464 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9902ccfe-ba6d-4b0d-a03e-a066c1da6379" (UID: "9902ccfe-ba6d-4b0d-a03e-a066c1da6379"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.270669 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.270709 4765 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.270725 4765 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.270739 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqzh7\" (UniqueName: \"kubernetes.io/projected/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-kube-api-access-gqzh7\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.270752 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.284765 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.360328 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-config-data" (OuterVolumeSpecName: "config-data") pod "9902ccfe-ba6d-4b0d-a03e-a066c1da6379" (UID: "9902ccfe-ba6d-4b0d-a03e-a066c1da6379"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.373237 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9902ccfe-ba6d-4b0d-a03e-a066c1da6379-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.633454 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="099ff49f-9143-4fa1-9844-cb66dc028aca" path="/var/lib/kubelet/pods/099ff49f-9143-4fa1-9844-cb66dc028aca/volumes" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.634061 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be7431e9-c408-49e2-80b8-4d13da26f0ee" path="/var/lib/kubelet/pods/be7431e9-c408-49e2-80b8-4d13da26f0ee/volumes" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.872523 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-c67b7f46c-vdfh2" event={"ID":"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b","Type":"ContainerStarted","Data":"dbe7130a65cd9cc26cb620c8fd11e75358c864bd2075ccb902e45d10cbcb1da8"} Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.872858 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-c67b7f46c-vdfh2" event={"ID":"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b","Type":"ContainerStarted","Data":"0001ffeea2f86baa16ef098cb6bea3a4edd4d32b6af051778fd46e8b2e4e8626"} Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.879998 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9902ccfe-ba6d-4b0d-a03e-a066c1da6379","Type":"ContainerDied","Data":"b0b1e9c46dbce326adc2c337ebb27e396d35bb4e3c397ea6777139620cbf0a8b"} Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.880049 4765 scope.go:117] "RemoveContainer" containerID="e964cd9b7f60b62985a8a40b4f603c6c93489de047464c8bfbcf74ab375840d7" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.880229 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.943512 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.958282 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.978270 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 13:24:19 crc kubenswrapper[4765]: E0121 13:24:19.978897 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9902ccfe-ba6d-4b0d-a03e-a066c1da6379" containerName="cinder-scheduler" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.978910 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9902ccfe-ba6d-4b0d-a03e-a066c1da6379" containerName="cinder-scheduler" Jan 21 13:24:19 crc kubenswrapper[4765]: E0121 13:24:19.978929 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="099ff49f-9143-4fa1-9844-cb66dc028aca" containerName="barbican-keystone-listener" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.978935 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="099ff49f-9143-4fa1-9844-cb66dc028aca" containerName="barbican-keystone-listener" Jan 21 13:24:19 crc kubenswrapper[4765]: E0121 13:24:19.978949 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="099ff49f-9143-4fa1-9844-cb66dc028aca" containerName="barbican-keystone-listener-log" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.978955 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="099ff49f-9143-4fa1-9844-cb66dc028aca" containerName="barbican-keystone-listener-log" Jan 21 13:24:19 crc kubenswrapper[4765]: E0121 13:24:19.978973 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9902ccfe-ba6d-4b0d-a03e-a066c1da6379" containerName="probe" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.978979 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9902ccfe-ba6d-4b0d-a03e-a066c1da6379" containerName="probe" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.979141 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="9902ccfe-ba6d-4b0d-a03e-a066c1da6379" containerName="probe" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.979163 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="9902ccfe-ba6d-4b0d-a03e-a066c1da6379" containerName="cinder-scheduler" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.979173 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="099ff49f-9143-4fa1-9844-cb66dc028aca" containerName="barbican-keystone-listener" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.979181 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="099ff49f-9143-4fa1-9844-cb66dc028aca" containerName="barbican-keystone-listener-log" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.980700 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.987692 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 13:24:19 crc kubenswrapper[4765]: I0121 13:24:19.991873 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.012142 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d8e00dc-cddb-4ae9-a128-684e2ca459f7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9d8e00dc-cddb-4ae9-a128-684e2ca459f7\") " pod="openstack/cinder-scheduler-0" Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.012241 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d8e00dc-cddb-4ae9-a128-684e2ca459f7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9d8e00dc-cddb-4ae9-a128-684e2ca459f7\") " pod="openstack/cinder-scheduler-0" Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.012277 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d8e00dc-cddb-4ae9-a128-684e2ca459f7-scripts\") pod \"cinder-scheduler-0\" (UID: \"9d8e00dc-cddb-4ae9-a128-684e2ca459f7\") " pod="openstack/cinder-scheduler-0" Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.012302 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d8e00dc-cddb-4ae9-a128-684e2ca459f7-config-data\") pod \"cinder-scheduler-0\" (UID: \"9d8e00dc-cddb-4ae9-a128-684e2ca459f7\") " pod="openstack/cinder-scheduler-0" Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.012345 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thcjm\" (UniqueName: \"kubernetes.io/projected/9d8e00dc-cddb-4ae9-a128-684e2ca459f7-kube-api-access-thcjm\") pod \"cinder-scheduler-0\" (UID: \"9d8e00dc-cddb-4ae9-a128-684e2ca459f7\") " pod="openstack/cinder-scheduler-0" Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.012381 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d8e00dc-cddb-4ae9-a128-684e2ca459f7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9d8e00dc-cddb-4ae9-a128-684e2ca459f7\") " pod="openstack/cinder-scheduler-0" Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.114936 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d8e00dc-cddb-4ae9-a128-684e2ca459f7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9d8e00dc-cddb-4ae9-a128-684e2ca459f7\") " pod="openstack/cinder-scheduler-0" Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.115073 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d8e00dc-cddb-4ae9-a128-684e2ca459f7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9d8e00dc-cddb-4ae9-a128-684e2ca459f7\") " pod="openstack/cinder-scheduler-0" Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.115134 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d8e00dc-cddb-4ae9-a128-684e2ca459f7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9d8e00dc-cddb-4ae9-a128-684e2ca459f7\") " pod="openstack/cinder-scheduler-0" Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.115176 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d8e00dc-cddb-4ae9-a128-684e2ca459f7-scripts\") pod \"cinder-scheduler-0\" (UID: \"9d8e00dc-cddb-4ae9-a128-684e2ca459f7\") " pod="openstack/cinder-scheduler-0" Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.115221 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d8e00dc-cddb-4ae9-a128-684e2ca459f7-config-data\") pod \"cinder-scheduler-0\" (UID: \"9d8e00dc-cddb-4ae9-a128-684e2ca459f7\") " pod="openstack/cinder-scheduler-0" Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.115261 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thcjm\" (UniqueName: \"kubernetes.io/projected/9d8e00dc-cddb-4ae9-a128-684e2ca459f7-kube-api-access-thcjm\") pod \"cinder-scheduler-0\" (UID: \"9d8e00dc-cddb-4ae9-a128-684e2ca459f7\") " pod="openstack/cinder-scheduler-0" Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.120831 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d8e00dc-cddb-4ae9-a128-684e2ca459f7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9d8e00dc-cddb-4ae9-a128-684e2ca459f7\") " pod="openstack/cinder-scheduler-0" Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.131180 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d8e00dc-cddb-4ae9-a128-684e2ca459f7-config-data\") pod \"cinder-scheduler-0\" (UID: \"9d8e00dc-cddb-4ae9-a128-684e2ca459f7\") " pod="openstack/cinder-scheduler-0" Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.131573 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d8e00dc-cddb-4ae9-a128-684e2ca459f7-scripts\") pod \"cinder-scheduler-0\" (UID: \"9d8e00dc-cddb-4ae9-a128-684e2ca459f7\") " pod="openstack/cinder-scheduler-0" Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.132020 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d8e00dc-cddb-4ae9-a128-684e2ca459f7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9d8e00dc-cddb-4ae9-a128-684e2ca459f7\") " pod="openstack/cinder-scheduler-0" Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.139746 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d8e00dc-cddb-4ae9-a128-684e2ca459f7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9d8e00dc-cddb-4ae9-a128-684e2ca459f7\") " pod="openstack/cinder-scheduler-0" Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.139900 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thcjm\" (UniqueName: \"kubernetes.io/projected/9d8e00dc-cddb-4ae9-a128-684e2ca459f7-kube-api-access-thcjm\") pod \"cinder-scheduler-0\" (UID: \"9d8e00dc-cddb-4ae9-a128-684e2ca459f7\") " pod="openstack/cinder-scheduler-0" Jan 21 13:24:20 crc 
kubenswrapper[4765]: I0121 13:24:20.310030 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.736169 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.736749 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerName="ceilometer-central-agent" containerID="cri-o://c08ee816e2d56a9e73dee70f0f0dd364a6e9cac0f18755929529e40e60842b6e" gracePeriod=30 Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.737540 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerName="sg-core" containerID="cri-o://a1d8dd1c4966a38df6f9bf6da440ea459aaa75a29b38dd97b8dc225e649c5073" gracePeriod=30 Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.737674 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerName="proxy-httpd" containerID="cri-o://421ba147a644aefbaddaf933ee3e555acaa53dca4b782df36d1e699b8d6b0839" gracePeriod=30 Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.737695 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerName="ceilometer-notification-agent" containerID="cri-o://c319a7e069c7ff590c53b78a4a3b85dba69e27fc7d429354222090f637cef19a" gracePeriod=30 Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.753668 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.169:3000/\": EOF" Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.896079 4765 generic.go:334] "Generic (PLEG): container finished" podID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerID="a1d8dd1c4966a38df6f9bf6da440ea459aaa75a29b38dd97b8dc225e649c5073" exitCode=2 Jan 21 13:24:20 crc kubenswrapper[4765]: I0121 13:24:20.896120 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06aadd94-84cf-40f0-887f-24cadd8876d0","Type":"ContainerDied","Data":"a1d8dd1c4966a38df6f9bf6da440ea459aaa75a29b38dd97b8dc225e649c5073"} Jan 21 13:24:21 crc kubenswrapper[4765]: I0121 13:24:21.628968 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9902ccfe-ba6d-4b0d-a03e-a066c1da6379" path="/var/lib/kubelet/pods/9902ccfe-ba6d-4b0d-a03e-a066c1da6379/volumes" Jan 21 13:24:21 crc kubenswrapper[4765]: I0121 13:24:21.916594 4765 generic.go:334] "Generic (PLEG): container finished" podID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerID="421ba147a644aefbaddaf933ee3e555acaa53dca4b782df36d1e699b8d6b0839" exitCode=0 Jan 21 13:24:21 crc kubenswrapper[4765]: I0121 13:24:21.916910 4765 generic.go:334] "Generic (PLEG): container finished" podID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerID="c08ee816e2d56a9e73dee70f0f0dd364a6e9cac0f18755929529e40e60842b6e" exitCode=0 Jan 21 13:24:21 crc kubenswrapper[4765]: I0121 13:24:21.916672 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"06aadd94-84cf-40f0-887f-24cadd8876d0","Type":"ContainerDied","Data":"421ba147a644aefbaddaf933ee3e555acaa53dca4b782df36d1e699b8d6b0839"} Jan 21 13:24:21 crc kubenswrapper[4765]: I0121 13:24:21.916956 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06aadd94-84cf-40f0-887f-24cadd8876d0","Type":"ContainerDied","Data":"c08ee816e2d56a9e73dee70f0f0dd364a6e9cac0f18755929529e40e60842b6e"} Jan 21 13:24:22 crc kubenswrapper[4765]: I0121 13:24:22.941505 4765 generic.go:334] "Generic (PLEG): container finished" podID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerID="c319a7e069c7ff590c53b78a4a3b85dba69e27fc7d429354222090f637cef19a" exitCode=0 Jan 21 13:24:22 crc kubenswrapper[4765]: I0121 13:24:22.941551 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06aadd94-84cf-40f0-887f-24cadd8876d0","Type":"ContainerDied","Data":"c319a7e069c7ff590c53b78a4a3b85dba69e27fc7d429354222090f637cef19a"} Jan 21 13:24:23 crc kubenswrapper[4765]: I0121 13:24:23.954552 4765 generic.go:334] "Generic (PLEG): container finished" podID="074ae613-bc7f-4443-abdb-7010b6054997" containerID="aa436e74a6fd1c1c3a4ed7348015c8f931d8a51210c3f7b94c4c01885524ce52" exitCode=137 Jan 21 13:24:23 crc kubenswrapper[4765]: I0121 13:24:23.954607 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6558674dbd-lct5s" event={"ID":"074ae613-bc7f-4443-abdb-7010b6054997","Type":"ContainerDied","Data":"aa436e74a6fd1c1c3a4ed7348015c8f931d8a51210c3f7b94c4c01885524ce52"} Jan 21 13:24:23 crc kubenswrapper[4765]: I0121 13:24:23.958304 4765 generic.go:334] "Generic (PLEG): container finished" podID="1241b1f0-34c1-401a-b91f-13b72926cc2c" containerID="46f1a7c9396eca5402ea7a2319db77d5ead07a4127c2f33dffbb8adc136e01da" exitCode=137 Jan 21 13:24:23 crc kubenswrapper[4765]: I0121 13:24:23.958341 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86c57777f6-gqpgv" event={"ID":"1241b1f0-34c1-401a-b91f-13b72926cc2c","Type":"ContainerDied","Data":"46f1a7c9396eca5402ea7a2319db77d5ead07a4127c2f33dffbb8adc136e01da"} Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.610615 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dgs4w"] Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.611860 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dgs4w" Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.646927 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dgs4w"] Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.690971 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-lq24p"] Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.692582 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-lq24p" Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.705889 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lq24p"] Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.730042 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bx47\" (UniqueName: \"kubernetes.io/projected/2291fe12-d21d-4050-9296-40984ce36fd3-kube-api-access-7bx47\") pod \"nova-api-db-create-dgs4w\" (UID: \"2291fe12-d21d-4050-9296-40984ce36fd3\") " pod="openstack/nova-api-db-create-dgs4w" Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.730174 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2291fe12-d21d-4050-9296-40984ce36fd3-operator-scripts\") pod \"nova-api-db-create-dgs4w\" (UID: \"2291fe12-d21d-4050-9296-40984ce36fd3\") " pod="openstack/nova-api-db-create-dgs4w" Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.807059 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-6aaf-account-create-update-xmbm9"] Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.814488 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6aaf-account-create-update-xmbm9" Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.822882 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.846475 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bx47\" (UniqueName: \"kubernetes.io/projected/2291fe12-d21d-4050-9296-40984ce36fd3-kube-api-access-7bx47\") pod \"nova-api-db-create-dgs4w\" (UID: \"2291fe12-d21d-4050-9296-40984ce36fd3\") " pod="openstack/nova-api-db-create-dgs4w" Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.847386 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqpzv\" (UniqueName: \"kubernetes.io/projected/62802099-90dc-4ca1-b480-5dd33b03a17d-kube-api-access-mqpzv\") pod \"nova-cell0-db-create-lq24p\" (UID: \"62802099-90dc-4ca1-b480-5dd33b03a17d\") " pod="openstack/nova-cell0-db-create-lq24p" Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.847563 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2291fe12-d21d-4050-9296-40984ce36fd3-operator-scripts\") pod \"nova-api-db-create-dgs4w\" (UID: \"2291fe12-d21d-4050-9296-40984ce36fd3\") " pod="openstack/nova-api-db-create-dgs4w" Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.847723 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62802099-90dc-4ca1-b480-5dd33b03a17d-operator-scripts\") pod \"nova-cell0-db-create-lq24p\" (UID: \"62802099-90dc-4ca1-b480-5dd33b03a17d\") " pod="openstack/nova-cell0-db-create-lq24p" Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.858680 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6aaf-account-create-update-xmbm9"] Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.859263 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2291fe12-d21d-4050-9296-40984ce36fd3-operator-scripts\") pod \"nova-api-db-create-dgs4w\" (UID: \"2291fe12-d21d-4050-9296-40984ce36fd3\") " pod="openstack/nova-api-db-create-dgs4w" Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.873824 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bx47\" (UniqueName: \"kubernetes.io/projected/2291fe12-d21d-4050-9296-40984ce36fd3-kube-api-access-7bx47\") pod \"nova-api-db-create-dgs4w\" (UID: \"2291fe12-d21d-4050-9296-40984ce36fd3\") " pod="openstack/nova-api-db-create-dgs4w" Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.934599 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dgs4w" Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.950458 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqpzv\" (UniqueName: \"kubernetes.io/projected/62802099-90dc-4ca1-b480-5dd33b03a17d-kube-api-access-mqpzv\") pod \"nova-cell0-db-create-lq24p\" (UID: \"62802099-90dc-4ca1-b480-5dd33b03a17d\") " pod="openstack/nova-cell0-db-create-lq24p" Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.950540 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxdkq\" (UniqueName: \"kubernetes.io/projected/d1d066a7-4634-4680-84b3-f5bb40d939f3-kube-api-access-jxdkq\") pod \"nova-api-6aaf-account-create-update-xmbm9\" (UID: \"d1d066a7-4634-4680-84b3-f5bb40d939f3\") " pod="openstack/nova-api-6aaf-account-create-update-xmbm9" Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.950617 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1d066a7-4634-4680-84b3-f5bb40d939f3-operator-scripts\") pod \"nova-api-6aaf-account-create-update-xmbm9\" (UID: \"d1d066a7-4634-4680-84b3-f5bb40d939f3\") " pod="openstack/nova-api-6aaf-account-create-update-xmbm9" Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.950644 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62802099-90dc-4ca1-b480-5dd33b03a17d-operator-scripts\") pod \"nova-cell0-db-create-lq24p\" (UID: \"62802099-90dc-4ca1-b480-5dd33b03a17d\") " pod="openstack/nova-cell0-db-create-lq24p" Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.951394 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62802099-90dc-4ca1-b480-5dd33b03a17d-operator-scripts\") pod \"nova-cell0-db-create-lq24p\" (UID: \"62802099-90dc-4ca1-b480-5dd33b03a17d\") " pod="openstack/nova-cell0-db-create-lq24p" Jan 21 13:24:25 crc kubenswrapper[4765]: I0121 13:24:25.995338 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqpzv\" (UniqueName: \"kubernetes.io/projected/62802099-90dc-4ca1-b480-5dd33b03a17d-kube-api-access-mqpzv\") pod \"nova-cell0-db-create-lq24p\" (UID: \"62802099-90dc-4ca1-b480-5dd33b03a17d\") " pod="openstack/nova-cell0-db-create-lq24p" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.011601 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-lq24p" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.035957 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.036315 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="677ee428-97c3-4ee7-a68b-8eb406f5734c" containerName="glance-log" containerID="cri-o://b5f9ee99286880921762bb7f873930727cc8a13b80ef748b8ec83ef3e479a8b9" gracePeriod=30 Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.036921 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="677ee428-97c3-4ee7-a68b-8eb406f5734c" containerName="glance-httpd" containerID="cri-o://16422fa6c406388ec2153cf2b8e8959c82de1c40e29b88eca0a98c498f8c8d3c" gracePeriod=30 Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.049232 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-v7jnv"] Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.050463 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-v7jnv" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.052419 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxdkq\" (UniqueName: \"kubernetes.io/projected/d1d066a7-4634-4680-84b3-f5bb40d939f3-kube-api-access-jxdkq\") pod \"nova-api-6aaf-account-create-update-xmbm9\" (UID: \"d1d066a7-4634-4680-84b3-f5bb40d939f3\") " pod="openstack/nova-api-6aaf-account-create-update-xmbm9" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.052687 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1d066a7-4634-4680-84b3-f5bb40d939f3-operator-scripts\") pod \"nova-api-6aaf-account-create-update-xmbm9\" (UID: \"d1d066a7-4634-4680-84b3-f5bb40d939f3\") " pod="openstack/nova-api-6aaf-account-create-update-xmbm9" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.053477 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1d066a7-4634-4680-84b3-f5bb40d939f3-operator-scripts\") pod \"nova-api-6aaf-account-create-update-xmbm9\" (UID: \"d1d066a7-4634-4680-84b3-f5bb40d939f3\") " pod="openstack/nova-api-6aaf-account-create-update-xmbm9" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.082080 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-444e-account-create-update-4qk7x"] Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.086653 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-444e-account-create-update-4qk7x" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.093749 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.117985 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxdkq\" (UniqueName: \"kubernetes.io/projected/d1d066a7-4634-4680-84b3-f5bb40d939f3-kube-api-access-jxdkq\") pod \"nova-api-6aaf-account-create-update-xmbm9\" (UID: \"d1d066a7-4634-4680-84b3-f5bb40d939f3\") " pod="openstack/nova-api-6aaf-account-create-update-xmbm9" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.154848 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-v7jnv"] Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.156261 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c-operator-scripts\") pod \"nova-cell1-db-create-v7jnv\" (UID: \"e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c\") " pod="openstack/nova-cell1-db-create-v7jnv" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.156410 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsmvk\" (UniqueName: \"kubernetes.io/projected/e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c-kube-api-access-tsmvk\") pod \"nova-cell1-db-create-v7jnv\" (UID: \"e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c\") " pod="openstack/nova-cell1-db-create-v7jnv" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.156867 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6aaf-account-create-update-xmbm9" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.188053 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-444e-account-create-update-4qk7x"] Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.257750 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d203539-6d7d-4db6-803f-c1954d20a55f-operator-scripts\") pod \"nova-cell0-444e-account-create-update-4qk7x\" (UID: \"7d203539-6d7d-4db6-803f-c1954d20a55f\") " pod="openstack/nova-cell0-444e-account-create-update-4qk7x" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.257830 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsmvk\" (UniqueName: \"kubernetes.io/projected/e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c-kube-api-access-tsmvk\") pod \"nova-cell1-db-create-v7jnv\" (UID: \"e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c\") " pod="openstack/nova-cell1-db-create-v7jnv" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.257921 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8kgd\" (UniqueName: \"kubernetes.io/projected/7d203539-6d7d-4db6-803f-c1954d20a55f-kube-api-access-t8kgd\") pod \"nova-cell0-444e-account-create-update-4qk7x\" (UID: \"7d203539-6d7d-4db6-803f-c1954d20a55f\") " pod="openstack/nova-cell0-444e-account-create-update-4qk7x" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.257988 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c-operator-scripts\") pod \"nova-cell1-db-create-v7jnv\" (UID: \"e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c\") " pod="openstack/nova-cell1-db-create-v7jnv" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.258716 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c-operator-scripts\") pod \"nova-cell1-db-create-v7jnv\" (UID: \"e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c\") " pod="openstack/nova-cell1-db-create-v7jnv" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.287802 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsmvk\" (UniqueName: \"kubernetes.io/projected/e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c-kube-api-access-tsmvk\") pod \"nova-cell1-db-create-v7jnv\" (UID: \"e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c\") " pod="openstack/nova-cell1-db-create-v7jnv" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.325734 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-5135-account-create-update-drt8l"] Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.330928 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5135-account-create-update-drt8l" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.333601 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.344730 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5135-account-create-update-drt8l"] Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.359566 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d203539-6d7d-4db6-803f-c1954d20a55f-operator-scripts\") pod \"nova-cell0-444e-account-create-update-4qk7x\" (UID: \"7d203539-6d7d-4db6-803f-c1954d20a55f\") " pod="openstack/nova-cell0-444e-account-create-update-4qk7x" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.359755 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8kgd\" (UniqueName: \"kubernetes.io/projected/7d203539-6d7d-4db6-803f-c1954d20a55f-kube-api-access-t8kgd\") pod \"nova-cell0-444e-account-create-update-4qk7x\" (UID: \"7d203539-6d7d-4db6-803f-c1954d20a55f\") " pod="openstack/nova-cell0-444e-account-create-update-4qk7x" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.360861 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d203539-6d7d-4db6-803f-c1954d20a55f-operator-scripts\") pod \"nova-cell0-444e-account-create-update-4qk7x\" (UID: \"7d203539-6d7d-4db6-803f-c1954d20a55f\") " pod="openstack/nova-cell0-444e-account-create-update-4qk7x" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.383487 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-v7jnv" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.427173 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8kgd\" (UniqueName: \"kubernetes.io/projected/7d203539-6d7d-4db6-803f-c1954d20a55f-kube-api-access-t8kgd\") pod \"nova-cell0-444e-account-create-update-4qk7x\" (UID: \"7d203539-6d7d-4db6-803f-c1954d20a55f\") " pod="openstack/nova-cell0-444e-account-create-update-4qk7x" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.470483 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hln7m\" (UniqueName: \"kubernetes.io/projected/299980e5-044a-4ee7-a28d-b11babd43597-kube-api-access-hln7m\") pod \"nova-cell1-5135-account-create-update-drt8l\" (UID: \"299980e5-044a-4ee7-a28d-b11babd43597\") " pod="openstack/nova-cell1-5135-account-create-update-drt8l" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.470585 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/299980e5-044a-4ee7-a28d-b11babd43597-operator-scripts\") pod \"nova-cell1-5135-account-create-update-drt8l\" (UID: \"299980e5-044a-4ee7-a28d-b11babd43597\") " pod="openstack/nova-cell1-5135-account-create-update-drt8l" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.478290 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-444e-account-create-update-4qk7x" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.572489 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hln7m\" (UniqueName: \"kubernetes.io/projected/299980e5-044a-4ee7-a28d-b11babd43597-kube-api-access-hln7m\") pod \"nova-cell1-5135-account-create-update-drt8l\" (UID: \"299980e5-044a-4ee7-a28d-b11babd43597\") " pod="openstack/nova-cell1-5135-account-create-update-drt8l" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.572566 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/299980e5-044a-4ee7-a28d-b11babd43597-operator-scripts\") pod \"nova-cell1-5135-account-create-update-drt8l\" (UID: \"299980e5-044a-4ee7-a28d-b11babd43597\") " pod="openstack/nova-cell1-5135-account-create-update-drt8l" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.573428 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/299980e5-044a-4ee7-a28d-b11babd43597-operator-scripts\") pod \"nova-cell1-5135-account-create-update-drt8l\" (UID: \"299980e5-044a-4ee7-a28d-b11babd43597\") " pod="openstack/nova-cell1-5135-account-create-update-drt8l" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.594807 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hln7m\" (UniqueName: \"kubernetes.io/projected/299980e5-044a-4ee7-a28d-b11babd43597-kube-api-access-hln7m\") pod \"nova-cell1-5135-account-create-update-drt8l\" (UID: \"299980e5-044a-4ee7-a28d-b11babd43597\") " pod="openstack/nova-cell1-5135-account-create-update-drt8l" Jan 21 13:24:26 crc kubenswrapper[4765]: I0121 13:24:26.684635 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5135-account-create-update-drt8l" Jan 21 13:24:27 crc kubenswrapper[4765]: I0121 13:24:27.013395 4765 generic.go:334] "Generic (PLEG): container finished" podID="677ee428-97c3-4ee7-a68b-8eb406f5734c" containerID="b5f9ee99286880921762bb7f873930727cc8a13b80ef748b8ec83ef3e479a8b9" exitCode=143 Jan 21 13:24:27 crc kubenswrapper[4765]: I0121 13:24:27.013443 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"677ee428-97c3-4ee7-a68b-8eb406f5734c","Type":"ContainerDied","Data":"b5f9ee99286880921762bb7f873930727cc8a13b80ef748b8ec83ef3e479a8b9"} Jan 21 13:24:27 crc kubenswrapper[4765]: I0121 13:24:27.471935 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 13:24:27 crc kubenswrapper[4765]: I0121 13:24:27.472410 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4233242d-f981-4e9c-b8d0-0ea546d328c3" containerName="glance-log" containerID="cri-o://575d61fed73b271b7bc2060c0acb686a2fc3e396b6c9ea37e6ed4335c84091a0" gracePeriod=30 Jan 21 13:24:27 crc kubenswrapper[4765]: I0121 13:24:27.472540 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4233242d-f981-4e9c-b8d0-0ea546d328c3" containerName="glance-httpd" containerID="cri-o://1a3a1b538e1d9a2858b08c392c47ea3cd1b8949624df5522dffd4885d438a96e" gracePeriod=30 Jan 21 13:24:28 crc kubenswrapper[4765]: I0121 13:24:28.027012 4765 generic.go:334] "Generic (PLEG): container finished" podID="4233242d-f981-4e9c-b8d0-0ea546d328c3" containerID="575d61fed73b271b7bc2060c0acb686a2fc3e396b6c9ea37e6ed4335c84091a0" exitCode=143 Jan 21 13:24:28 crc kubenswrapper[4765]: I0121 13:24:28.027077 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4233242d-f981-4e9c-b8d0-0ea546d328c3","Type":"ContainerDied","Data":"575d61fed73b271b7bc2060c0acb686a2fc3e396b6c9ea37e6ed4335c84091a0"} Jan 21 13:24:28 crc kubenswrapper[4765]: I0121 13:24:28.915458 4765 scope.go:117] "RemoveContainer" containerID="a434d50e63d966f15f3f9d12e1901082a58ba411e27069d64143fd32ed4f676d" Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.447323 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.572387 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06aadd94-84cf-40f0-887f-24cadd8876d0-log-httpd\") pod \"06aadd94-84cf-40f0-887f-24cadd8876d0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.572467 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-combined-ca-bundle\") pod \"06aadd94-84cf-40f0-887f-24cadd8876d0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.572519 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff2f6\" (UniqueName: \"kubernetes.io/projected/06aadd94-84cf-40f0-887f-24cadd8876d0-kube-api-access-ff2f6\") pod \"06aadd94-84cf-40f0-887f-24cadd8876d0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.572546 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06aadd94-84cf-40f0-887f-24cadd8876d0-run-httpd\") pod \"06aadd94-84cf-40f0-887f-24cadd8876d0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.572581 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-config-data\") pod \"06aadd94-84cf-40f0-887f-24cadd8876d0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.572627 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-scripts\") pod \"06aadd94-84cf-40f0-887f-24cadd8876d0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.572693 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-sg-core-conf-yaml\") pod \"06aadd94-84cf-40f0-887f-24cadd8876d0\" (UID: \"06aadd94-84cf-40f0-887f-24cadd8876d0\") " Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.577892 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06aadd94-84cf-40f0-887f-24cadd8876d0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "06aadd94-84cf-40f0-887f-24cadd8876d0" (UID: "06aadd94-84cf-40f0-887f-24cadd8876d0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.619717 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06aadd94-84cf-40f0-887f-24cadd8876d0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "06aadd94-84cf-40f0-887f-24cadd8876d0" (UID: "06aadd94-84cf-40f0-887f-24cadd8876d0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.658482 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06aadd94-84cf-40f0-887f-24cadd8876d0-kube-api-access-ff2f6" (OuterVolumeSpecName: "kube-api-access-ff2f6") pod "06aadd94-84cf-40f0-887f-24cadd8876d0" (UID: "06aadd94-84cf-40f0-887f-24cadd8876d0"). InnerVolumeSpecName "kube-api-access-ff2f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.662047 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-scripts" (OuterVolumeSpecName: "scripts") pod "06aadd94-84cf-40f0-887f-24cadd8876d0" (UID: "06aadd94-84cf-40f0-887f-24cadd8876d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.676677 4765 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06aadd94-84cf-40f0-887f-24cadd8876d0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.676729 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff2f6\" (UniqueName: \"kubernetes.io/projected/06aadd94-84cf-40f0-887f-24cadd8876d0-kube-api-access-ff2f6\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.676745 4765 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/06aadd94-84cf-40f0-887f-24cadd8876d0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.676756 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.731404 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "06aadd94-84cf-40f0-887f-24cadd8876d0" (UID: "06aadd94-84cf-40f0-887f-24cadd8876d0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.816316 4765 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.918955 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="677ee428-97c3-4ee7-a68b-8eb406f5734c" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": read tcp 10.217.0.2:54994->10.217.0.152:9292: read: connection reset by peer" Jan 21 13:24:29 crc kubenswrapper[4765]: I0121 13:24:29.919026 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="677ee428-97c3-4ee7-a68b-8eb406f5734c" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": read tcp 10.217.0.2:55008->10.217.0.152:9292: read: connection reset by peer" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.129286 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.129341 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"06aadd94-84cf-40f0-887f-24cadd8876d0","Type":"ContainerDied","Data":"94936eb38b430632dcd246291c9edc39c9bffb043a8c53a63a0f5c56a7aa684e"} Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.129435 4765 scope.go:117] "RemoveContainer" containerID="421ba147a644aefbaddaf933ee3e555acaa53dca4b782df36d1e699b8d6b0839" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.142716 4765 generic.go:334] "Generic (PLEG): container finished" podID="677ee428-97c3-4ee7-a68b-8eb406f5734c" containerID="16422fa6c406388ec2153cf2b8e8959c82de1c40e29b88eca0a98c498f8c8d3c" exitCode=0 Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.143452 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"677ee428-97c3-4ee7-a68b-8eb406f5734c","Type":"ContainerDied","Data":"16422fa6c406388ec2153cf2b8e8959c82de1c40e29b88eca0a98c498f8c8d3c"} Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.266880 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5135-account-create-update-drt8l"] Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.389971 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dgs4w"] Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.450544 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06aadd94-84cf-40f0-887f-24cadd8876d0" (UID: "06aadd94-84cf-40f0-887f-24cadd8876d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.451417 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-config-data" (OuterVolumeSpecName: "config-data") pod "06aadd94-84cf-40f0-887f-24cadd8876d0" (UID: "06aadd94-84cf-40f0-887f-24cadd8876d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.536454 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.536497 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06aadd94-84cf-40f0-887f-24cadd8876d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.629314 4765 scope.go:117] "RemoveContainer" containerID="a1d8dd1c4966a38df6f9bf6da440ea459aaa75a29b38dd97b8dc225e649c5073" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.691852 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="4233242d-f981-4e9c-b8d0-0ea546d328c3" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": read tcp 10.217.0.2:42844->10.217.0.151:9292: read: connection reset by peer" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.695329 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="4233242d-f981-4e9c-b8d0-0ea546d328c3" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": read tcp 10.217.0.2:42832->10.217.0.151:9292: read: connection reset by peer" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.728792 4765 scope.go:117] "RemoveContainer" containerID="c319a7e069c7ff590c53b78a4a3b85dba69e27fc7d429354222090f637cef19a" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.884409 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.896290 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.918747 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:24:30 crc kubenswrapper[4765]: E0121 13:24:30.921738 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerName="ceilometer-notification-agent" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.921758 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerName="ceilometer-notification-agent" Jan 21 13:24:30 crc kubenswrapper[4765]: E0121 13:24:30.921780 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerName="proxy-httpd" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.921786 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerName="proxy-httpd" Jan 21 13:24:30 crc kubenswrapper[4765]: E0121 13:24:30.921816 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerName="ceilometer-central-agent" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.921824 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerName="ceilometer-central-agent" Jan 21 13:24:30 crc kubenswrapper[4765]: E0121 13:24:30.921836 4765 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerName="sg-core" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.921843 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerName="sg-core" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.922052 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerName="ceilometer-notification-agent" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.922075 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerName="proxy-httpd" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.922086 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerName="ceilometer-central-agent" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.922095 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="06aadd94-84cf-40f0-887f-24cadd8876d0" containerName="sg-core" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.924249 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.938603 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.938760 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.939384 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.962402 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.962583 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.962724 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-config-data\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.962879 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66125986-9e2e-4609-bf04-d486e27bc800-log-httpd\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.962971 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66125986-9e2e-4609-bf04-d486e27bc800-run-httpd\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " 
pod="openstack/ceilometer-0" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.963066 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-scripts\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.963150 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmvkp\" (UniqueName: \"kubernetes.io/projected/66125986-9e2e-4609-bf04-d486e27bc800-kube-api-access-lmvkp\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:30 crc kubenswrapper[4765]: I0121 13:24:30.963728 4765 scope.go:117] "RemoveContainer" containerID="c08ee816e2d56a9e73dee70f0f0dd364a6e9cac0f18755929529e40e60842b6e" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.073996 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.074069 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.074105 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-config-data\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.074145 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66125986-9e2e-4609-bf04-d486e27bc800-log-httpd\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.074167 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66125986-9e2e-4609-bf04-d486e27bc800-run-httpd\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.074200 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-scripts\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.074246 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmvkp\" (UniqueName: \"kubernetes.io/projected/66125986-9e2e-4609-bf04-d486e27bc800-kube-api-access-lmvkp\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.076886 4765 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66125986-9e2e-4609-bf04-d486e27bc800-log-httpd\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.081140 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66125986-9e2e-4609-bf04-d486e27bc800-run-httpd\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.086228 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.088885 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-config-data\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.091100 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.095072 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.103540 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmvkp\" (UniqueName: \"kubernetes.io/projected/66125986-9e2e-4609-bf04-d486e27bc800-kube-api-access-lmvkp\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.104877 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-v7jnv"] Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.108038 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-scripts\") pod \"ceilometer-0\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " pod="openstack/ceilometer-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.133530 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lq24p"] Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.150263 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6aaf-account-create-update-xmbm9"] Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.157034 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-444e-account-create-update-4qk7x"] Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.170434 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.177196 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-config-data\") pod \"677ee428-97c3-4ee7-a68b-8eb406f5734c\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.177462 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-combined-ca-bundle\") pod \"677ee428-97c3-4ee7-a68b-8eb406f5734c\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.177648 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-public-tls-certs\") pod \"677ee428-97c3-4ee7-a68b-8eb406f5734c\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.177757 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/677ee428-97c3-4ee7-a68b-8eb406f5734c-logs\") pod \"677ee428-97c3-4ee7-a68b-8eb406f5734c\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.178055 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-scripts\") pod \"677ee428-97c3-4ee7-a68b-8eb406f5734c\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.178173 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/677ee428-97c3-4ee7-a68b-8eb406f5734c-httpd-run\") pod \"677ee428-97c3-4ee7-a68b-8eb406f5734c\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.178390 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mvjm\" (UniqueName: \"kubernetes.io/projected/677ee428-97c3-4ee7-a68b-8eb406f5734c-kube-api-access-7mvjm\") pod \"677ee428-97c3-4ee7-a68b-8eb406f5734c\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.178510 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"677ee428-97c3-4ee7-a68b-8eb406f5734c\" (UID: \"677ee428-97c3-4ee7-a68b-8eb406f5734c\") " Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.184910 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/677ee428-97c3-4ee7-a68b-8eb406f5734c-logs" (OuterVolumeSpecName: "logs") pod "677ee428-97c3-4ee7-a68b-8eb406f5734c" (UID: "677ee428-97c3-4ee7-a68b-8eb406f5734c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.185597 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/677ee428-97c3-4ee7-a68b-8eb406f5734c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "677ee428-97c3-4ee7-a68b-8eb406f5734c" (UID: "677ee428-97c3-4ee7-a68b-8eb406f5734c"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.194828 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/677ee428-97c3-4ee7-a68b-8eb406f5734c-kube-api-access-7mvjm" (OuterVolumeSpecName: "kube-api-access-7mvjm") pod "677ee428-97c3-4ee7-a68b-8eb406f5734c" (UID: "677ee428-97c3-4ee7-a68b-8eb406f5734c"). InnerVolumeSpecName "kube-api-access-7mvjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.199105 4765 generic.go:334] "Generic (PLEG): container finished" podID="4233242d-f981-4e9c-b8d0-0ea546d328c3" containerID="1a3a1b538e1d9a2858b08c392c47ea3cd1b8949624df5522dffd4885d438a96e" exitCode=0 Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.199276 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4233242d-f981-4e9c-b8d0-0ea546d328c3","Type":"ContainerDied","Data":"1a3a1b538e1d9a2858b08c392c47ea3cd1b8949624df5522dffd4885d438a96e"} Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.199873 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-scripts" (OuterVolumeSpecName: "scripts") pod "677ee428-97c3-4ee7-a68b-8eb406f5734c" (UID: "677ee428-97c3-4ee7-a68b-8eb406f5734c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.207427 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "677ee428-97c3-4ee7-a68b-8eb406f5734c" (UID: "677ee428-97c3-4ee7-a68b-8eb406f5734c"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.220916 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"677ee428-97c3-4ee7-a68b-8eb406f5734c","Type":"ContainerDied","Data":"fe17f4df94f2f887a7d85e2418b552b10689688148162c3ddc05ad0c2cb84da3"} Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.220973 4765 scope.go:117] "RemoveContainer" containerID="16422fa6c406388ec2153cf2b8e8959c82de1c40e29b88eca0a98c498f8c8d3c" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.221082 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.251386 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "677ee428-97c3-4ee7-a68b-8eb406f5734c" (UID: "677ee428-97c3-4ee7-a68b-8eb406f5734c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.264964 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86c57777f6-gqpgv" event={"ID":"1241b1f0-34c1-401a-b91f-13b72926cc2c","Type":"ContainerStarted","Data":"78e055d27064852c7be2cdb43ad8f3d3122cb6da672a31461fcc48a6a005bc48"} Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.281570 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mvjm\" (UniqueName: \"kubernetes.io/projected/677ee428-97c3-4ee7-a68b-8eb406f5734c-kube-api-access-7mvjm\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.281625 4765 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.281644 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.281657 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/677ee428-97c3-4ee7-a68b-8eb406f5734c-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.281670 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.281681 4765 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/677ee428-97c3-4ee7-a68b-8eb406f5734c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.309619 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"344fdbd2-c402-42e4-83d5-7e0bb3b978f6","Type":"ContainerStarted","Data":"58d2faa6c3807817e1abd3c314f4e6302abf65621e7249d06451a106a49f4037"} Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.339071 4765 scope.go:117] "RemoveContainer" containerID="b5f9ee99286880921762bb7f873930727cc8a13b80ef748b8ec83ef3e479a8b9" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.386085 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.405518 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "677ee428-97c3-4ee7-a68b-8eb406f5734c" (UID: "677ee428-97c3-4ee7-a68b-8eb406f5734c"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.424333 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6558674dbd-lct5s" event={"ID":"074ae613-bc7f-4443-abdb-7010b6054997","Type":"ContainerStarted","Data":"e031dd893b547965535c1708b7e364ac4020188df01de94c0db0612a266dcb98"} Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.440486 4765 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.441344 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5135-account-create-update-drt8l" event={"ID":"299980e5-044a-4ee7-a28d-b11babd43597","Type":"ContainerStarted","Data":"87c8805eaf388ddecbd04f74faeb5b5c293e75054835dd01cf0a21f3f6fe2adf"} Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.441388 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5135-account-create-update-drt8l" event={"ID":"299980e5-044a-4ee7-a28d-b11babd43597","Type":"ContainerStarted","Data":"56eed51f509a71e6e8c6bd621cafba5a12ae6c6a2c78a30bc48c9bb90b88a397"} Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.441407 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.854062266 podStartE2EDuration="20.438739825s" podCreationTimestamp="2026-01-21 13:24:11 +0000 UTC" firstStartedPulling="2026-01-21 13:24:12.429913591 +0000 UTC m=+1313.447639413" lastFinishedPulling="2026-01-21 13:24:29.01459115 +0000 UTC m=+1330.032316972" observedRunningTime="2026-01-21 13:24:31.386618645 +0000 UTC m=+1332.404344497" watchObservedRunningTime="2026-01-21 13:24:31.438739825 +0000 UTC m=+1332.456465647" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.444715 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dgs4w" event={"ID":"2291fe12-d21d-4050-9296-40984ce36fd3","Type":"ContainerStarted","Data":"d4afaa9160d8ad23fea28b505c38684dda32046b259c43ce3f46c093ae8fa356"} Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.444766 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dgs4w" event={"ID":"2291fe12-d21d-4050-9296-40984ce36fd3","Type":"ContainerStarted","Data":"ad2b93d7aa5e770ed1b374d15ebb6aaa4a40c92cf49f19c0c019a0d3e01526b2"} Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.480845 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-config-data" (OuterVolumeSpecName: "config-data") pod "677ee428-97c3-4ee7-a68b-8eb406f5734c" (UID: "677ee428-97c3-4ee7-a68b-8eb406f5734c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.501974 4765 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.502096 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.502118 4765 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/677ee428-97c3-4ee7-a68b-8eb406f5734c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.504152 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-c67b7f46c-vdfh2" event={"ID":"dcc230e6-cf6d-4fc2-bea2-9ba2b028716b","Type":"ContainerStarted","Data":"2a156acb38e8d8729f3b3f21b5cdb5435ee4d7779f09f0f19d4f7136c97d3415"} Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.504587 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.504613 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.564243 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-dgs4w" podStartSLOduration=6.564223835 podStartE2EDuration="6.564223835s" podCreationTimestamp="2026-01-21 13:24:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:24:31.535473446 +0000 UTC m=+1332.553199268" watchObservedRunningTime="2026-01-21 13:24:31.564223835 +0000 UTC m=+1332.581949657" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.577612 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-c67b7f46c-vdfh2" podUID="dcc230e6-cf6d-4fc2-bea2-9ba2b028716b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.667671 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-5135-account-create-update-drt8l" podStartSLOduration=5.667648322 podStartE2EDuration="5.667648322s" podCreationTimestamp="2026-01-21 13:24:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:24:31.558426876 +0000 UTC m=+1332.576152698" watchObservedRunningTime="2026-01-21 13:24:31.667648322 +0000 UTC m=+1332.685374144" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.695020 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06aadd94-84cf-40f0-887f-24cadd8876d0" path="/var/lib/kubelet/pods/06aadd94-84cf-40f0-887f-24cadd8876d0/volumes" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.709292 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-c67b7f46c-vdfh2" podStartSLOduration=14.709270855 podStartE2EDuration="14.709270855s" podCreationTimestamp="2026-01-21 13:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:24:31.583837187 +0000 UTC m=+1332.601562999" watchObservedRunningTime="2026-01-21 13:24:31.709270855 +0000 UTC m=+1332.726996677" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.740558 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.749043 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.812263 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 13:24:31 crc kubenswrapper[4765]: E0121 13:24:31.812667 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="677ee428-97c3-4ee7-a68b-8eb406f5734c" containerName="glance-httpd" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.812678 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="677ee428-97c3-4ee7-a68b-8eb406f5734c" containerName="glance-httpd" Jan 21 13:24:31 crc kubenswrapper[4765]: E0121 13:24:31.812701 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="677ee428-97c3-4ee7-a68b-8eb406f5734c" containerName="glance-log" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.812707 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="677ee428-97c3-4ee7-a68b-8eb406f5734c" containerName="glance-log" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.812870 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="677ee428-97c3-4ee7-a68b-8eb406f5734c" containerName="glance-httpd" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.812900 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="677ee428-97c3-4ee7-a68b-8eb406f5734c" containerName="glance-log" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.813859 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.816195 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.817360 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.850949 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.919717 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/165f5e89-08b4-465c-acc6-52d76f9c0db0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.920055 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.920189 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165f5e89-08b4-465c-acc6-52d76f9c0db0-config-data\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.920384 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/165f5e89-08b4-465c-acc6-52d76f9c0db0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.920525 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/165f5e89-08b4-465c-acc6-52d76f9c0db0-scripts\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.920708 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165f5e89-08b4-465c-acc6-52d76f9c0db0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.920837 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/165f5e89-08b4-465c-acc6-52d76f9c0db0-logs\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:31 crc kubenswrapper[4765]: I0121 13:24:31.921117 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sghc6\" (UniqueName: \"kubernetes.io/projected/165f5e89-08b4-465c-acc6-52d76f9c0db0-kube-api-access-sghc6\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.023890 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sghc6\" (UniqueName: \"kubernetes.io/projected/165f5e89-08b4-465c-acc6-52d76f9c0db0-kube-api-access-sghc6\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.024504 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/165f5e89-08b4-465c-acc6-52d76f9c0db0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.024693 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.024742 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165f5e89-08b4-465c-acc6-52d76f9c0db0-config-data\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.024801 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/165f5e89-08b4-465c-acc6-52d76f9c0db0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.024865 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/165f5e89-08b4-465c-acc6-52d76f9c0db0-scripts\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.024957 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165f5e89-08b4-465c-acc6-52d76f9c0db0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.025012 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/165f5e89-08b4-465c-acc6-52d76f9c0db0-logs\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.025344 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/165f5e89-08b4-465c-acc6-52d76f9c0db0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.025776 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/165f5e89-08b4-465c-acc6-52d76f9c0db0-logs\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.026119 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.032611 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/165f5e89-08b4-465c-acc6-52d76f9c0db0-scripts\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.033294 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165f5e89-08b4-465c-acc6-52d76f9c0db0-config-data\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.034157 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/165f5e89-08b4-465c-acc6-52d76f9c0db0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.050199 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.051872 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sghc6\" (UniqueName: \"kubernetes.io/projected/165f5e89-08b4-465c-acc6-52d76f9c0db0-kube-api-access-sghc6\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.065709 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165f5e89-08b4-465c-acc6-52d76f9c0db0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.126087 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-combined-ca-bundle\") pod \"4233242d-f981-4e9c-b8d0-0ea546d328c3\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.126235 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-internal-tls-certs\") pod \"4233242d-f981-4e9c-b8d0-0ea546d328c3\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.126266 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4233242d-f981-4e9c-b8d0-0ea546d328c3-logs\") pod \"4233242d-f981-4e9c-b8d0-0ea546d328c3\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.126870 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4233242d-f981-4e9c-b8d0-0ea546d328c3-logs" (OuterVolumeSpecName: "logs") pod "4233242d-f981-4e9c-b8d0-0ea546d328c3" (UID: "4233242d-f981-4e9c-b8d0-0ea546d328c3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.128794 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"165f5e89-08b4-465c-acc6-52d76f9c0db0\") " pod="openstack/glance-default-external-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.129055 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"4233242d-f981-4e9c-b8d0-0ea546d328c3\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.129154 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-config-data\") pod \"4233242d-f981-4e9c-b8d0-0ea546d328c3\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.129189 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4233242d-f981-4e9c-b8d0-0ea546d328c3-httpd-run\") pod \"4233242d-f981-4e9c-b8d0-0ea546d328c3\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.129312 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-scripts\") pod \"4233242d-f981-4e9c-b8d0-0ea546d328c3\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.129379 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm66f\" (UniqueName: \"kubernetes.io/projected/4233242d-f981-4e9c-b8d0-0ea546d328c3-kube-api-access-sm66f\") pod \"4233242d-f981-4e9c-b8d0-0ea546d328c3\" (UID: \"4233242d-f981-4e9c-b8d0-0ea546d328c3\") " Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.130199 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4233242d-f981-4e9c-b8d0-0ea546d328c3-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.134482 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4233242d-f981-4e9c-b8d0-0ea546d328c3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4233242d-f981-4e9c-b8d0-0ea546d328c3" (UID: "4233242d-f981-4e9c-b8d0-0ea546d328c3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.145535 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4233242d-f981-4e9c-b8d0-0ea546d328c3-kube-api-access-sm66f" (OuterVolumeSpecName: "kube-api-access-sm66f") pod "4233242d-f981-4e9c-b8d0-0ea546d328c3" (UID: "4233242d-f981-4e9c-b8d0-0ea546d328c3"). InnerVolumeSpecName "kube-api-access-sm66f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.152198 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-scripts" (OuterVolumeSpecName: "scripts") pod "4233242d-f981-4e9c-b8d0-0ea546d328c3" (UID: "4233242d-f981-4e9c-b8d0-0ea546d328c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.152633 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "4233242d-f981-4e9c-b8d0-0ea546d328c3" (UID: "4233242d-f981-4e9c-b8d0-0ea546d328c3"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.221736 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4233242d-f981-4e9c-b8d0-0ea546d328c3" (UID: "4233242d-f981-4e9c-b8d0-0ea546d328c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.233734 4765 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4233242d-f981-4e9c-b8d0-0ea546d328c3-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.233779 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.233790 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm66f\" (UniqueName: \"kubernetes.io/projected/4233242d-f981-4e9c-b8d0-0ea546d328c3-kube-api-access-sm66f\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.233801 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.233823 4765 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.302378 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.305272 4765 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.337611 4765 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.360784 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4233242d-f981-4e9c-b8d0-0ea546d328c3" (UID: "4233242d-f981-4e9c-b8d0-0ea546d328c3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.401879 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-config-data" (OuterVolumeSpecName: "config-data") pod "4233242d-f981-4e9c-b8d0-0ea546d328c3" (UID: "4233242d-f981-4e9c-b8d0-0ea546d328c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.438714 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.438808 4765 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4233242d-f981-4e9c-b8d0-0ea546d328c3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.489891 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.537787 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.552397 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66125986-9e2e-4609-bf04-d486e27bc800","Type":"ContainerStarted","Data":"59f7a1e90459f8875ae00c8366604f549bc2ddbbc0155788f2956521294bf71c"} Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.560726 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d8e00dc-cddb-4ae9-a128-684e2ca459f7","Type":"ContainerStarted","Data":"104073e7248c1629d2cf59c297801938f6f1ed614a0517979e7434910d613885"} Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.580977 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4233242d-f981-4e9c-b8d0-0ea546d328c3","Type":"ContainerDied","Data":"f37db62c2d962a968f605c6393f46c6dc0d081f3e98d0da4549f74950b192a97"} Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.581063 4765 scope.go:117] "RemoveContainer" containerID="1a3a1b538e1d9a2858b08c392c47ea3cd1b8949624df5522dffd4885d438a96e" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.581924 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.595189 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-v7jnv" event={"ID":"e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c","Type":"ContainerStarted","Data":"582fa202cd6972c03d03347eeca2f0305bedcde853805d3ace49c7dc61a1340d"} Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.611880 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lq24p" event={"ID":"62802099-90dc-4ca1-b480-5dd33b03a17d","Type":"ContainerStarted","Data":"c3e843558227e5847d684d91b22159eb615052d46ed024128a8fcc5becd5162a"} Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.633011 4765 scope.go:117] "RemoveContainer" containerID="575d61fed73b271b7bc2060c0acb686a2fc3e396b6c9ea37e6ed4335c84091a0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.661882 4765 generic.go:334] "Generic (PLEG): container finished" podID="299980e5-044a-4ee7-a28d-b11babd43597" containerID="87c8805eaf388ddecbd04f74faeb5b5c293e75054835dd01cf0a21f3f6fe2adf" exitCode=0 Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.661976 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5135-account-create-update-drt8l" event={"ID":"299980e5-044a-4ee7-a28d-b11babd43597","Type":"ContainerDied","Data":"87c8805eaf388ddecbd04f74faeb5b5c293e75054835dd01cf0a21f3f6fe2adf"} Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.666019 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-v7jnv" podStartSLOduration=6.665997941 podStartE2EDuration="6.665997941s" podCreationTimestamp="2026-01-21 13:24:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:24:32.633341248 +0000 UTC m=+1333.651067100" watchObservedRunningTime="2026-01-21 13:24:32.665997941 +0000 UTC m=+1333.683723773" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.679976 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.718120 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.758881 4765 generic.go:334] "Generic (PLEG): container finished" podID="2291fe12-d21d-4050-9296-40984ce36fd3" containerID="d4afaa9160d8ad23fea28b505c38684dda32046b259c43ce3f46c093ae8fa356" exitCode=0 Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.758974 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dgs4w" event={"ID":"2291fe12-d21d-4050-9296-40984ce36fd3","Type":"ContainerDied","Data":"d4afaa9160d8ad23fea28b505c38684dda32046b259c43ce3f46c093ae8fa356"} Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.775008 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-lq24p" podStartSLOduration=7.7749764599999995 podStartE2EDuration="7.77497646s" podCreationTimestamp="2026-01-21 13:24:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:24:32.751653599 +0000 UTC m=+1333.769379431" watchObservedRunningTime="2026-01-21 13:24:32.77497646 +0000 UTC m=+1333.792702282" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 
13:24:32.779829 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-444e-account-create-update-4qk7x" event={"ID":"7d203539-6d7d-4db6-803f-c1954d20a55f","Type":"ContainerStarted","Data":"aebc722b7c60400325caff016377847995cc2804c497845f299ab072a6b0b1b8"} Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.796305 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6aaf-account-create-update-xmbm9" event={"ID":"d1d066a7-4634-4680-84b3-f5bb40d939f3","Type":"ContainerStarted","Data":"40de2336141d3abcf400f4391054c7066225025ccd845c81fec7bf82ea7462fa"} Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.817167 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 13:24:32 crc kubenswrapper[4765]: E0121 13:24:32.817630 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4233242d-f981-4e9c-b8d0-0ea546d328c3" containerName="glance-httpd" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.817647 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4233242d-f981-4e9c-b8d0-0ea546d328c3" containerName="glance-httpd" Jan 21 13:24:32 crc kubenswrapper[4765]: E0121 13:24:32.817668 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4233242d-f981-4e9c-b8d0-0ea546d328c3" containerName="glance-log" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.817677 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4233242d-f981-4e9c-b8d0-0ea546d328c3" containerName="glance-log" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.817894 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="4233242d-f981-4e9c-b8d0-0ea546d328c3" containerName="glance-httpd" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.817921 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="4233242d-f981-4e9c-b8d0-0ea546d328c3" containerName="glance-log" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.818923 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.887306 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.892843 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 13:24:32 crc kubenswrapper[4765]: I0121 13:24:32.937600 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.005943 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-444e-account-create-update-4qk7x" podStartSLOduration=7.005919296 podStartE2EDuration="7.005919296s" podCreationTimestamp="2026-01-21 13:24:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:24:32.92755193 +0000 UTC m=+1333.945277752" watchObservedRunningTime="2026-01-21 13:24:33.005919296 +0000 UTC m=+1334.023645118" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.014477 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.014522 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-scripts\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.014571 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-logs\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.014632 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.014652 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-config-data\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.014673 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg2kx\" (UniqueName: \"kubernetes.io/projected/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-kube-api-access-dg2kx\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " 
pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.014763 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.014817 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.114173 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-6aaf-account-create-update-xmbm9" podStartSLOduration=8.114153443 podStartE2EDuration="8.114153443s" podCreationTimestamp="2026-01-21 13:24:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:24:33.019033578 +0000 UTC m=+1334.036759410" watchObservedRunningTime="2026-01-21 13:24:33.114153443 +0000 UTC m=+1334.131879255" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.116679 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.116724 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-scripts\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.116766 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-logs\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.116809 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.116826 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-config-data\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.116845 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg2kx\" (UniqueName: 
\"kubernetes.io/projected/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-kube-api-access-dg2kx\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.116893 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.116932 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.137458 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-logs\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.138062 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.138518 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.144970 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.168052 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.168105 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-scripts\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.196790 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.197715 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg2kx\" (UniqueName: \"kubernetes.io/projected/85a4c5bc-cacf-4c49-b285-295c9bfb7b74-kube-api-access-dg2kx\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.249373 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-c67b7f46c-vdfh2" podUID="dcc230e6-cf6d-4fc2-bea2-9ba2b028716b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.273084 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"85a4c5bc-cacf-4c49-b285-295c9bfb7b74\") " pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.281082 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6558674dbd-lct5s" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.281136 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6558674dbd-lct5s" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.282444 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-c67b7f46c-vdfh2" podUID="dcc230e6-cf6d-4fc2-bea2-9ba2b028716b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.314057 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-c67b7f46c-vdfh2" podUID="dcc230e6-cf6d-4fc2-bea2-9ba2b028716b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.378365 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-86c57777f6-gqpgv" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.378431 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-86c57777f6-gqpgv" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.477466 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.529958 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.638341 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4233242d-f981-4e9c-b8d0-0ea546d328c3" path="/var/lib/kubelet/pods/4233242d-f981-4e9c-b8d0-0ea546d328c3/volumes" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.639741 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="677ee428-97c3-4ee7-a68b-8eb406f5734c" path="/var/lib/kubelet/pods/677ee428-97c3-4ee7-a68b-8eb406f5734c/volumes" Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.883738 4765 generic.go:334] "Generic (PLEG): container finished" podID="e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c" containerID="bbb972b66db43b6e66d81b1563e861b0dbbe0dfe33feb298cbaf499dc6d2f21f" exitCode=0 Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.884070 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-v7jnv" event={"ID":"e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c","Type":"ContainerDied","Data":"bbb972b66db43b6e66d81b1563e861b0dbbe0dfe33feb298cbaf499dc6d2f21f"} Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.897528 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"165f5e89-08b4-465c-acc6-52d76f9c0db0","Type":"ContainerStarted","Data":"046dd9684f11266f51f7187dcdc1cef2cd690ff58b8c30e2ecc7ce8f6230231f"} Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.960614 4765 generic.go:334] "Generic (PLEG): container finished" podID="62802099-90dc-4ca1-b480-5dd33b03a17d" containerID="75537113ffa7f9b977ff2c7e4e8e71e502ce21e5ed0c5d09a50950eaf45b6d8d" exitCode=0 Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.960711 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lq24p" event={"ID":"62802099-90dc-4ca1-b480-5dd33b03a17d","Type":"ContainerDied","Data":"75537113ffa7f9b977ff2c7e4e8e71e502ce21e5ed0c5d09a50950eaf45b6d8d"} Jan 21 13:24:33 crc kubenswrapper[4765]: I0121 13:24:33.992074 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6aaf-account-create-update-xmbm9" event={"ID":"d1d066a7-4634-4680-84b3-f5bb40d939f3","Type":"ContainerStarted","Data":"69158d1fe50f2ce7919262c6f0f42c64a0f0afa79f20bfc41ba3486cb4fec69c"} Jan 21 13:24:34 crc kubenswrapper[4765]: I0121 13:24:34.013890 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d8e00dc-cddb-4ae9-a128-684e2ca459f7","Type":"ContainerStarted","Data":"dcd9f07547feffc5d377e86bf7cb71719549368032d9fa48d571cc3d2746f4a4"} Jan 21 13:24:34 crc kubenswrapper[4765]: I0121 13:24:34.083721 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-444e-account-create-update-4qk7x" event={"ID":"7d203539-6d7d-4db6-803f-c1954d20a55f","Type":"ContainerStarted","Data":"21262357282bbb7af9ffdfa83d9ad2025f6b0adf0775e91fab3bbb72d1a548a0"} Jan 21 13:24:34 crc kubenswrapper[4765]: I0121 13:24:34.097505 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66125986-9e2e-4609-bf04-d486e27bc800","Type":"ContainerStarted","Data":"20bf913cad2109f76b86fae11c0718166c0e9131982c41bd9024803a7d1f9ae6"} Jan 21 13:24:34 crc kubenswrapper[4765]: I0121 13:24:34.512052 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.132805 4765 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"66125986-9e2e-4609-bf04-d486e27bc800","Type":"ContainerStarted","Data":"f7f34889f6657468f0e502ffae9d763d11e1d2b4329c9fb0f4a8c9515478a32e"} Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.141700 4765 generic.go:334] "Generic (PLEG): container finished" podID="7d203539-6d7d-4db6-803f-c1954d20a55f" containerID="21262357282bbb7af9ffdfa83d9ad2025f6b0adf0775e91fab3bbb72d1a548a0" exitCode=0 Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.141758 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-444e-account-create-update-4qk7x" event={"ID":"7d203539-6d7d-4db6-803f-c1954d20a55f","Type":"ContainerDied","Data":"21262357282bbb7af9ffdfa83d9ad2025f6b0adf0775e91fab3bbb72d1a548a0"} Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.171657 4765 generic.go:334] "Generic (PLEG): container finished" podID="d1d066a7-4634-4680-84b3-f5bb40d939f3" containerID="69158d1fe50f2ce7919262c6f0f42c64a0f0afa79f20bfc41ba3486cb4fec69c" exitCode=0 Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.171736 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6aaf-account-create-update-xmbm9" event={"ID":"d1d066a7-4634-4680-84b3-f5bb40d939f3","Type":"ContainerDied","Data":"69158d1fe50f2ce7919262c6f0f42c64a0f0afa79f20bfc41ba3486cb4fec69c"} Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.177659 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85a4c5bc-cacf-4c49-b285-295c9bfb7b74","Type":"ContainerStarted","Data":"df53f593c9609adc2c304adfa68e41630e9660ac6a5aac01c9eeecfed6fc4369"} Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.234028 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dgs4w" Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.235989 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5135-account-create-update-drt8l" Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.347012 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2291fe12-d21d-4050-9296-40984ce36fd3-operator-scripts\") pod \"2291fe12-d21d-4050-9296-40984ce36fd3\" (UID: \"2291fe12-d21d-4050-9296-40984ce36fd3\") " Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.347059 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/299980e5-044a-4ee7-a28d-b11babd43597-operator-scripts\") pod \"299980e5-044a-4ee7-a28d-b11babd43597\" (UID: \"299980e5-044a-4ee7-a28d-b11babd43597\") " Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.347260 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bx47\" (UniqueName: \"kubernetes.io/projected/2291fe12-d21d-4050-9296-40984ce36fd3-kube-api-access-7bx47\") pod \"2291fe12-d21d-4050-9296-40984ce36fd3\" (UID: \"2291fe12-d21d-4050-9296-40984ce36fd3\") " Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.347319 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hln7m\" (UniqueName: \"kubernetes.io/projected/299980e5-044a-4ee7-a28d-b11babd43597-kube-api-access-hln7m\") pod \"299980e5-044a-4ee7-a28d-b11babd43597\" (UID: \"299980e5-044a-4ee7-a28d-b11babd43597\") " Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.354376 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/299980e5-044a-4ee7-a28d-b11babd43597-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "299980e5-044a-4ee7-a28d-b11babd43597" (UID: "299980e5-044a-4ee7-a28d-b11babd43597"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.354750 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2291fe12-d21d-4050-9296-40984ce36fd3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2291fe12-d21d-4050-9296-40984ce36fd3" (UID: "2291fe12-d21d-4050-9296-40984ce36fd3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.355221 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/299980e5-044a-4ee7-a28d-b11babd43597-kube-api-access-hln7m" (OuterVolumeSpecName: "kube-api-access-hln7m") pod "299980e5-044a-4ee7-a28d-b11babd43597" (UID: "299980e5-044a-4ee7-a28d-b11babd43597"). InnerVolumeSpecName "kube-api-access-hln7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.365702 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2291fe12-d21d-4050-9296-40984ce36fd3-kube-api-access-7bx47" (OuterVolumeSpecName: "kube-api-access-7bx47") pod "2291fe12-d21d-4050-9296-40984ce36fd3" (UID: "2291fe12-d21d-4050-9296-40984ce36fd3"). InnerVolumeSpecName "kube-api-access-7bx47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.459879 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/299980e5-044a-4ee7-a28d-b11babd43597-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.460161 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bx47\" (UniqueName: \"kubernetes.io/projected/2291fe12-d21d-4050-9296-40984ce36fd3-kube-api-access-7bx47\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.460177 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hln7m\" (UniqueName: \"kubernetes.io/projected/299980e5-044a-4ee7-a28d-b11babd43597-kube-api-access-hln7m\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:35 crc kubenswrapper[4765]: I0121 13:24:35.460187 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2291fe12-d21d-4050-9296-40984ce36fd3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.126179 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lq24p" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.233118 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62802099-90dc-4ca1-b480-5dd33b03a17d-operator-scripts\") pod \"62802099-90dc-4ca1-b480-5dd33b03a17d\" (UID: \"62802099-90dc-4ca1-b480-5dd33b03a17d\") " Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.233180 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqpzv\" (UniqueName: \"kubernetes.io/projected/62802099-90dc-4ca1-b480-5dd33b03a17d-kube-api-access-mqpzv\") pod \"62802099-90dc-4ca1-b480-5dd33b03a17d\" (UID: \"62802099-90dc-4ca1-b480-5dd33b03a17d\") " Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.238176 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62802099-90dc-4ca1-b480-5dd33b03a17d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62802099-90dc-4ca1-b480-5dd33b03a17d" (UID: "62802099-90dc-4ca1-b480-5dd33b03a17d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.252500 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62802099-90dc-4ca1-b480-5dd33b03a17d-kube-api-access-mqpzv" (OuterVolumeSpecName: "kube-api-access-mqpzv") pod "62802099-90dc-4ca1-b480-5dd33b03a17d" (UID: "62802099-90dc-4ca1-b480-5dd33b03a17d"). InnerVolumeSpecName "kube-api-access-mqpzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.311389 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"165f5e89-08b4-465c-acc6-52d76f9c0db0","Type":"ContainerStarted","Data":"7455c52eb2a4b85deaa8eff3d8db407986729f82a09aeb64cc342fd6de58f59f"} Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.331904 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d8e00dc-cddb-4ae9-a128-684e2ca459f7","Type":"ContainerStarted","Data":"2fcfc8b1cba4fbf987cc433f49a5cf6108bca134b40807f6f4769a262d82283f"} Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.334453 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62802099-90dc-4ca1-b480-5dd33b03a17d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.344774 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqpzv\" (UniqueName: \"kubernetes.io/projected/62802099-90dc-4ca1-b480-5dd33b03a17d-kube-api-access-mqpzv\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.364279 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lq24p" event={"ID":"62802099-90dc-4ca1-b480-5dd33b03a17d","Type":"ContainerDied","Data":"c3e843558227e5847d684d91b22159eb615052d46ed024128a8fcc5becd5162a"} Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.364323 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3e843558227e5847d684d91b22159eb615052d46ed024128a8fcc5becd5162a" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.364286 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6aaf-account-create-update-xmbm9" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.364439 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lq24p" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.366080 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-444e-account-create-update-4qk7x" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.396236 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5135-account-create-update-drt8l" event={"ID":"299980e5-044a-4ee7-a28d-b11babd43597","Type":"ContainerDied","Data":"56eed51f509a71e6e8c6bd621cafba5a12ae6c6a2c78a30bc48c9bb90b88a397"} Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.396282 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56eed51f509a71e6e8c6bd621cafba5a12ae6c6a2c78a30bc48c9bb90b88a397" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.396349 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5135-account-create-update-drt8l" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.398900 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dgs4w" event={"ID":"2291fe12-d21d-4050-9296-40984ce36fd3","Type":"ContainerDied","Data":"ad2b93d7aa5e770ed1b374d15ebb6aaa4a40c92cf49f19c0c019a0d3e01526b2"} Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.398929 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad2b93d7aa5e770ed1b374d15ebb6aaa4a40c92cf49f19c0c019a0d3e01526b2" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.398978 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dgs4w" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.417930 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=17.417903993 podStartE2EDuration="17.417903993s" podCreationTimestamp="2026-01-21 13:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:24:36.368903874 +0000 UTC m=+1337.386629696" watchObservedRunningTime="2026-01-21 13:24:36.417903993 +0000 UTC m=+1337.435629815" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.446729 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8kgd\" (UniqueName: \"kubernetes.io/projected/7d203539-6d7d-4db6-803f-c1954d20a55f-kube-api-access-t8kgd\") pod \"7d203539-6d7d-4db6-803f-c1954d20a55f\" (UID: \"7d203539-6d7d-4db6-803f-c1954d20a55f\") " Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.446856 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d203539-6d7d-4db6-803f-c1954d20a55f-operator-scripts\") pod \"7d203539-6d7d-4db6-803f-c1954d20a55f\" (UID: \"7d203539-6d7d-4db6-803f-c1954d20a55f\") " Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.446908 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxdkq\" (UniqueName: \"kubernetes.io/projected/d1d066a7-4634-4680-84b3-f5bb40d939f3-kube-api-access-jxdkq\") pod \"d1d066a7-4634-4680-84b3-f5bb40d939f3\" (UID: \"d1d066a7-4634-4680-84b3-f5bb40d939f3\") " Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.446994 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1d066a7-4634-4680-84b3-f5bb40d939f3-operator-scripts\") pod \"d1d066a7-4634-4680-84b3-f5bb40d939f3\" (UID: \"d1d066a7-4634-4680-84b3-f5bb40d939f3\") " Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.450586 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d203539-6d7d-4db6-803f-c1954d20a55f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d203539-6d7d-4db6-803f-c1954d20a55f" (UID: "7d203539-6d7d-4db6-803f-c1954d20a55f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.465210 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1d066a7-4634-4680-84b3-f5bb40d939f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1d066a7-4634-4680-84b3-f5bb40d939f3" (UID: "d1d066a7-4634-4680-84b3-f5bb40d939f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.473081 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d203539-6d7d-4db6-803f-c1954d20a55f-kube-api-access-t8kgd" (OuterVolumeSpecName: "kube-api-access-t8kgd") pod "7d203539-6d7d-4db6-803f-c1954d20a55f" (UID: "7d203539-6d7d-4db6-803f-c1954d20a55f"). InnerVolumeSpecName "kube-api-access-t8kgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.473925 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1d066a7-4634-4680-84b3-f5bb40d939f3-kube-api-access-jxdkq" (OuterVolumeSpecName: "kube-api-access-jxdkq") pod "d1d066a7-4634-4680-84b3-f5bb40d939f3" (UID: "d1d066a7-4634-4680-84b3-f5bb40d939f3"). InnerVolumeSpecName "kube-api-access-jxdkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.509556 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-v7jnv" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.548448 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c-operator-scripts\") pod \"e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c\" (UID: \"e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c\") " Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.548682 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsmvk\" (UniqueName: \"kubernetes.io/projected/e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c-kube-api-access-tsmvk\") pod \"e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c\" (UID: \"e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c\") " Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.549360 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8kgd\" (UniqueName: \"kubernetes.io/projected/7d203539-6d7d-4db6-803f-c1954d20a55f-kube-api-access-t8kgd\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.549391 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d203539-6d7d-4db6-803f-c1954d20a55f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.549404 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxdkq\" (UniqueName: \"kubernetes.io/projected/d1d066a7-4634-4680-84b3-f5bb40d939f3-kube-api-access-jxdkq\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.549417 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1d066a7-4634-4680-84b3-f5bb40d939f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.550563 4765 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c" (UID: "e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.558376 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c-kube-api-access-tsmvk" (OuterVolumeSpecName: "kube-api-access-tsmvk") pod "e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c" (UID: "e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c"). InnerVolumeSpecName "kube-api-access-tsmvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.651901 4765 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:36 crc kubenswrapper[4765]: I0121 13:24:36.651947 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsmvk\" (UniqueName: \"kubernetes.io/projected/e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c-kube-api-access-tsmvk\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:37 crc kubenswrapper[4765]: I0121 13:24:37.436131 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-444e-account-create-update-4qk7x" event={"ID":"7d203539-6d7d-4db6-803f-c1954d20a55f","Type":"ContainerDied","Data":"aebc722b7c60400325caff016377847995cc2804c497845f299ab072a6b0b1b8"} Jan 21 13:24:37 crc kubenswrapper[4765]: I0121 13:24:37.436575 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aebc722b7c60400325caff016377847995cc2804c497845f299ab072a6b0b1b8" Jan 21 13:24:37 crc kubenswrapper[4765]: I0121 13:24:37.436651 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-444e-account-create-update-4qk7x" Jan 21 13:24:37 crc kubenswrapper[4765]: I0121 13:24:37.446483 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6aaf-account-create-update-xmbm9" event={"ID":"d1d066a7-4634-4680-84b3-f5bb40d939f3","Type":"ContainerDied","Data":"40de2336141d3abcf400f4391054c7066225025ccd845c81fec7bf82ea7462fa"} Jan 21 13:24:37 crc kubenswrapper[4765]: I0121 13:24:37.446539 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40de2336141d3abcf400f4391054c7066225025ccd845c81fec7bf82ea7462fa" Jan 21 13:24:37 crc kubenswrapper[4765]: I0121 13:24:37.446657 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6aaf-account-create-update-xmbm9" Jan 21 13:24:37 crc kubenswrapper[4765]: I0121 13:24:37.451588 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-v7jnv" event={"ID":"e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c","Type":"ContainerDied","Data":"582fa202cd6972c03d03347eeca2f0305bedcde853805d3ace49c7dc61a1340d"} Jan 21 13:24:37 crc kubenswrapper[4765]: I0121 13:24:37.451636 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="582fa202cd6972c03d03347eeca2f0305bedcde853805d3ace49c7dc61a1340d" Jan 21 13:24:37 crc kubenswrapper[4765]: I0121 13:24:37.451721 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-v7jnv" Jan 21 13:24:37 crc kubenswrapper[4765]: I0121 13:24:37.486978 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85a4c5bc-cacf-4c49-b285-295c9bfb7b74","Type":"ContainerStarted","Data":"85e2d1bb93b89c8896c6c13237b867f5a5dd47eb677e2b7263c3e65bb9c3f6d1"} Jan 21 13:24:37 crc kubenswrapper[4765]: I0121 13:24:37.514385 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66125986-9e2e-4609-bf04-d486e27bc800","Type":"ContainerStarted","Data":"b1d92f5f4ec9db9e8fd56da14e79ec394605ef60b83afdade54cbeaecad62a00"} Jan 21 13:24:37 crc kubenswrapper[4765]: I0121 13:24:37.527924 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"165f5e89-08b4-465c-acc6-52d76f9c0db0","Type":"ContainerStarted","Data":"dad6951d175232a754cf970a679526e7cdd931f5bc66ec19b8f3370b21ce5802"} Jan 21 13:24:37 crc kubenswrapper[4765]: I0121 13:24:37.567818 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.567800571 podStartE2EDuration="6.567800571s" podCreationTimestamp="2026-01-21 13:24:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:24:37.557973385 +0000 UTC m=+1338.575699207" watchObservedRunningTime="2026-01-21 13:24:37.567800571 +0000 UTC m=+1338.585526393" Jan 21 13:24:38 crc kubenswrapper[4765]: I0121 13:24:38.326481 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:38 crc kubenswrapper[4765]: I0121 13:24:38.350795 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-c67b7f46c-vdfh2" Jan 21 13:24:38 crc kubenswrapper[4765]: I0121 13:24:38.567839 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85a4c5bc-cacf-4c49-b285-295c9bfb7b74","Type":"ContainerStarted","Data":"e7f83b898974d9f58e98937f5a51bbdbd37fde382187e643de339a31a0fdcc74"} Jan 21 13:24:38 crc kubenswrapper[4765]: I0121 13:24:38.606319 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.606293011 podStartE2EDuration="6.606293011s" podCreationTimestamp="2026-01-21 13:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:24:38.600616236 +0000 UTC m=+1339.618342058" watchObservedRunningTime="2026-01-21 13:24:38.606293011 +0000 UTC m=+1339.624018853" Jan 21 13:24:39 crc kubenswrapper[4765]: I0121 13:24:39.668138 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66125986-9e2e-4609-bf04-d486e27bc800","Type":"ContainerStarted","Data":"8985004effba907a391e33da10b72f8b10c67a2d8cd84518df1ad973f4928d44"} Jan 21 13:24:39 crc kubenswrapper[4765]: I0121 13:24:39.668283 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 13:24:39 crc kubenswrapper[4765]: I0121 13:24:39.709924 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.3117572299999996 podStartE2EDuration="9.709901297s" podCreationTimestamp="2026-01-21 
13:24:30 +0000 UTC" firstStartedPulling="2026-01-21 13:24:32.537497733 +0000 UTC m=+1333.555223555" lastFinishedPulling="2026-01-21 13:24:37.93564181 +0000 UTC m=+1338.953367622" observedRunningTime="2026-01-21 13:24:39.702687677 +0000 UTC m=+1340.720413509" watchObservedRunningTime="2026-01-21 13:24:39.709901297 +0000 UTC m=+1340.727627129" Jan 21 13:24:40 crc kubenswrapper[4765]: I0121 13:24:40.311436 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 13:24:40 crc kubenswrapper[4765]: I0121 13:24:40.764957 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.620485 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2ks5c"] Jan 21 13:24:41 crc kubenswrapper[4765]: E0121 13:24:41.621116 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2291fe12-d21d-4050-9296-40984ce36fd3" containerName="mariadb-database-create" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.621133 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="2291fe12-d21d-4050-9296-40984ce36fd3" containerName="mariadb-database-create" Jan 21 13:24:41 crc kubenswrapper[4765]: E0121 13:24:41.621146 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d203539-6d7d-4db6-803f-c1954d20a55f" containerName="mariadb-account-create-update" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.621154 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d203539-6d7d-4db6-803f-c1954d20a55f" containerName="mariadb-account-create-update" Jan 21 13:24:41 crc kubenswrapper[4765]: E0121 13:24:41.621163 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62802099-90dc-4ca1-b480-5dd33b03a17d" containerName="mariadb-database-create" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.621170 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="62802099-90dc-4ca1-b480-5dd33b03a17d" containerName="mariadb-database-create" Jan 21 13:24:41 crc kubenswrapper[4765]: E0121 13:24:41.621180 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c" containerName="mariadb-database-create" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.621185 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c" containerName="mariadb-database-create" Jan 21 13:24:41 crc kubenswrapper[4765]: E0121 13:24:41.621213 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d066a7-4634-4680-84b3-f5bb40d939f3" containerName="mariadb-account-create-update" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.621219 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d066a7-4634-4680-84b3-f5bb40d939f3" containerName="mariadb-account-create-update" Jan 21 13:24:41 crc kubenswrapper[4765]: E0121 13:24:41.621245 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="299980e5-044a-4ee7-a28d-b11babd43597" containerName="mariadb-account-create-update" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.621251 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="299980e5-044a-4ee7-a28d-b11babd43597" containerName="mariadb-account-create-update" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.621418 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d066a7-4634-4680-84b3-f5bb40d939f3" 
containerName="mariadb-account-create-update" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.621441 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="62802099-90dc-4ca1-b480-5dd33b03a17d" containerName="mariadb-database-create" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.621451 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="2291fe12-d21d-4050-9296-40984ce36fd3" containerName="mariadb-database-create" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.621459 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="299980e5-044a-4ee7-a28d-b11babd43597" containerName="mariadb-account-create-update" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.621468 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d203539-6d7d-4db6-803f-c1954d20a55f" containerName="mariadb-account-create-update" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.621478 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c" containerName="mariadb-database-create" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.622049 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2ks5c" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.631512 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.636033 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.636153 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zlzpq" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.639664 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2ks5c"] Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.755978 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cbd47b6-cd86-4ff3-a374-4863622fefad-scripts\") pod \"nova-cell0-conductor-db-sync-2ks5c\" (UID: \"3cbd47b6-cd86-4ff3-a374-4863622fefad\") " pod="openstack/nova-cell0-conductor-db-sync-2ks5c" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.756049 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbd47b6-cd86-4ff3-a374-4863622fefad-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2ks5c\" (UID: \"3cbd47b6-cd86-4ff3-a374-4863622fefad\") " pod="openstack/nova-cell0-conductor-db-sync-2ks5c" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.756083 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cbd47b6-cd86-4ff3-a374-4863622fefad-config-data\") pod \"nova-cell0-conductor-db-sync-2ks5c\" (UID: \"3cbd47b6-cd86-4ff3-a374-4863622fefad\") " pod="openstack/nova-cell0-conductor-db-sync-2ks5c" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.756279 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdpgw\" (UniqueName: \"kubernetes.io/projected/3cbd47b6-cd86-4ff3-a374-4863622fefad-kube-api-access-wdpgw\") pod 
\"nova-cell0-conductor-db-sync-2ks5c\" (UID: \"3cbd47b6-cd86-4ff3-a374-4863622fefad\") " pod="openstack/nova-cell0-conductor-db-sync-2ks5c" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.857459 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdpgw\" (UniqueName: \"kubernetes.io/projected/3cbd47b6-cd86-4ff3-a374-4863622fefad-kube-api-access-wdpgw\") pod \"nova-cell0-conductor-db-sync-2ks5c\" (UID: \"3cbd47b6-cd86-4ff3-a374-4863622fefad\") " pod="openstack/nova-cell0-conductor-db-sync-2ks5c" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.857525 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cbd47b6-cd86-4ff3-a374-4863622fefad-scripts\") pod \"nova-cell0-conductor-db-sync-2ks5c\" (UID: \"3cbd47b6-cd86-4ff3-a374-4863622fefad\") " pod="openstack/nova-cell0-conductor-db-sync-2ks5c" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.857595 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbd47b6-cd86-4ff3-a374-4863622fefad-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2ks5c\" (UID: \"3cbd47b6-cd86-4ff3-a374-4863622fefad\") " pod="openstack/nova-cell0-conductor-db-sync-2ks5c" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.857622 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cbd47b6-cd86-4ff3-a374-4863622fefad-config-data\") pod \"nova-cell0-conductor-db-sync-2ks5c\" (UID: \"3cbd47b6-cd86-4ff3-a374-4863622fefad\") " pod="openstack/nova-cell0-conductor-db-sync-2ks5c" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.863643 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cbd47b6-cd86-4ff3-a374-4863622fefad-config-data\") pod \"nova-cell0-conductor-db-sync-2ks5c\" (UID: \"3cbd47b6-cd86-4ff3-a374-4863622fefad\") " pod="openstack/nova-cell0-conductor-db-sync-2ks5c" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.865616 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cbd47b6-cd86-4ff3-a374-4863622fefad-scripts\") pod \"nova-cell0-conductor-db-sync-2ks5c\" (UID: \"3cbd47b6-cd86-4ff3-a374-4863622fefad\") " pod="openstack/nova-cell0-conductor-db-sync-2ks5c" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.866675 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbd47b6-cd86-4ff3-a374-4863622fefad-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2ks5c\" (UID: \"3cbd47b6-cd86-4ff3-a374-4863622fefad\") " pod="openstack/nova-cell0-conductor-db-sync-2ks5c" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.878352 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdpgw\" (UniqueName: \"kubernetes.io/projected/3cbd47b6-cd86-4ff3-a374-4863622fefad-kube-api-access-wdpgw\") pod \"nova-cell0-conductor-db-sync-2ks5c\" (UID: \"3cbd47b6-cd86-4ff3-a374-4863622fefad\") " pod="openstack/nova-cell0-conductor-db-sync-2ks5c" Jan 21 13:24:41 crc kubenswrapper[4765]: I0121 13:24:41.949939 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2ks5c" Jan 21 13:24:42 crc kubenswrapper[4765]: I0121 13:24:42.303118 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 13:24:42 crc kubenswrapper[4765]: I0121 13:24:42.304802 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 13:24:42 crc kubenswrapper[4765]: I0121 13:24:42.418686 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 13:24:42 crc kubenswrapper[4765]: I0121 13:24:42.426147 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 13:24:42 crc kubenswrapper[4765]: I0121 13:24:42.612603 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2ks5c"] Jan 21 13:24:42 crc kubenswrapper[4765]: I0121 13:24:42.759850 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2ks5c" event={"ID":"3cbd47b6-cd86-4ff3-a374-4863622fefad","Type":"ContainerStarted","Data":"2a8b86309b5a97f415e2947562e19ef948a3c1c78a607bf2d7fe2ac9d767d7d9"} Jan 21 13:24:42 crc kubenswrapper[4765]: I0121 13:24:42.759907 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 13:24:42 crc kubenswrapper[4765]: I0121 13:24:42.760023 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 13:24:43 crc kubenswrapper[4765]: I0121 13:24:43.281424 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6558674dbd-lct5s" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 21 13:24:43 crc kubenswrapper[4765]: I0121 13:24:43.382663 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86c57777f6-gqpgv" podUID="1241b1f0-34c1-401a-b91f-13b72926cc2c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 21 13:24:43 crc kubenswrapper[4765]: I0121 13:24:43.531398 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 13:24:43 crc kubenswrapper[4765]: I0121 13:24:43.531460 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 13:24:43 crc kubenswrapper[4765]: I0121 13:24:43.590892 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 13:24:43 crc kubenswrapper[4765]: I0121 13:24:43.601785 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 13:24:43 crc kubenswrapper[4765]: I0121 13:24:43.776038 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 13:24:43 crc kubenswrapper[4765]: I0121 13:24:43.776486 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 13:24:44 crc 
kubenswrapper[4765]: I0121 13:24:44.787402 4765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 13:24:44 crc kubenswrapper[4765]: I0121 13:24:44.787429 4765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 13:24:45 crc kubenswrapper[4765]: I0121 13:24:45.802557 4765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 13:24:45 crc kubenswrapper[4765]: I0121 13:24:45.802878 4765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 13:24:46 crc kubenswrapper[4765]: I0121 13:24:46.972981 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:24:46 crc kubenswrapper[4765]: I0121 13:24:46.973601 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66125986-9e2e-4609-bf04-d486e27bc800" containerName="ceilometer-central-agent" containerID="cri-o://20bf913cad2109f76b86fae11c0718166c0e9131982c41bd9024803a7d1f9ae6" gracePeriod=30 Jan 21 13:24:46 crc kubenswrapper[4765]: I0121 13:24:46.973755 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66125986-9e2e-4609-bf04-d486e27bc800" containerName="proxy-httpd" containerID="cri-o://8985004effba907a391e33da10b72f8b10c67a2d8cd84518df1ad973f4928d44" gracePeriod=30 Jan 21 13:24:46 crc kubenswrapper[4765]: I0121 13:24:46.973814 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66125986-9e2e-4609-bf04-d486e27bc800" containerName="sg-core" containerID="cri-o://b1d92f5f4ec9db9e8fd56da14e79ec394605ef60b83afdade54cbeaecad62a00" gracePeriod=30 Jan 21 13:24:46 crc kubenswrapper[4765]: I0121 13:24:46.973862 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66125986-9e2e-4609-bf04-d486e27bc800" containerName="ceilometer-notification-agent" containerID="cri-o://f7f34889f6657468f0e502ffae9d763d11e1d2b4329c9fb0f4a8c9515478a32e" gracePeriod=30 Jan 21 13:24:47 crc kubenswrapper[4765]: I0121 13:24:47.818124 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 13:24:47 crc kubenswrapper[4765]: I0121 13:24:47.818469 4765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 13:24:47 crc kubenswrapper[4765]: I0121 13:24:47.926089 4765 generic.go:334] "Generic (PLEG): container finished" podID="66125986-9e2e-4609-bf04-d486e27bc800" containerID="8985004effba907a391e33da10b72f8b10c67a2d8cd84518df1ad973f4928d44" exitCode=0 Jan 21 13:24:47 crc kubenswrapper[4765]: I0121 13:24:47.926121 4765 generic.go:334] "Generic (PLEG): container finished" podID="66125986-9e2e-4609-bf04-d486e27bc800" containerID="b1d92f5f4ec9db9e8fd56da14e79ec394605ef60b83afdade54cbeaecad62a00" exitCode=2 Jan 21 13:24:47 crc kubenswrapper[4765]: I0121 13:24:47.926129 4765 generic.go:334] "Generic (PLEG): container finished" podID="66125986-9e2e-4609-bf04-d486e27bc800" containerID="f7f34889f6657468f0e502ffae9d763d11e1d2b4329c9fb0f4a8c9515478a32e" exitCode=0 Jan 21 13:24:47 crc kubenswrapper[4765]: I0121 13:24:47.926136 4765 generic.go:334] "Generic (PLEG): container finished" podID="66125986-9e2e-4609-bf04-d486e27bc800" containerID="20bf913cad2109f76b86fae11c0718166c0e9131982c41bd9024803a7d1f9ae6" exitCode=0 Jan 21 13:24:47 crc kubenswrapper[4765]: I0121 13:24:47.926158 4765 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"66125986-9e2e-4609-bf04-d486e27bc800","Type":"ContainerDied","Data":"8985004effba907a391e33da10b72f8b10c67a2d8cd84518df1ad973f4928d44"} Jan 21 13:24:47 crc kubenswrapper[4765]: I0121 13:24:47.926185 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66125986-9e2e-4609-bf04-d486e27bc800","Type":"ContainerDied","Data":"b1d92f5f4ec9db9e8fd56da14e79ec394605ef60b83afdade54cbeaecad62a00"} Jan 21 13:24:47 crc kubenswrapper[4765]: I0121 13:24:47.926196 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66125986-9e2e-4609-bf04-d486e27bc800","Type":"ContainerDied","Data":"f7f34889f6657468f0e502ffae9d763d11e1d2b4329c9fb0f4a8c9515478a32e"} Jan 21 13:24:47 crc kubenswrapper[4765]: I0121 13:24:47.926219 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66125986-9e2e-4609-bf04-d486e27bc800","Type":"ContainerDied","Data":"20bf913cad2109f76b86fae11c0718166c0e9131982c41bd9024803a7d1f9ae6"} Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.143630 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.352061 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.352620 4765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.407539 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.492170 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmvkp\" (UniqueName: \"kubernetes.io/projected/66125986-9e2e-4609-bf04-d486e27bc800-kube-api-access-lmvkp\") pod \"66125986-9e2e-4609-bf04-d486e27bc800\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.492279 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-scripts\") pod \"66125986-9e2e-4609-bf04-d486e27bc800\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.492333 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66125986-9e2e-4609-bf04-d486e27bc800-log-httpd\") pod \"66125986-9e2e-4609-bf04-d486e27bc800\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.492354 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-config-data\") pod \"66125986-9e2e-4609-bf04-d486e27bc800\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.492472 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-sg-core-conf-yaml\") pod \"66125986-9e2e-4609-bf04-d486e27bc800\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 
13:24:48.492548 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66125986-9e2e-4609-bf04-d486e27bc800-run-httpd\") pod \"66125986-9e2e-4609-bf04-d486e27bc800\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.492574 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-combined-ca-bundle\") pod \"66125986-9e2e-4609-bf04-d486e27bc800\" (UID: \"66125986-9e2e-4609-bf04-d486e27bc800\") " Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.497096 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66125986-9e2e-4609-bf04-d486e27bc800-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "66125986-9e2e-4609-bf04-d486e27bc800" (UID: "66125986-9e2e-4609-bf04-d486e27bc800"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.522314 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66125986-9e2e-4609-bf04-d486e27bc800-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "66125986-9e2e-4609-bf04-d486e27bc800" (UID: "66125986-9e2e-4609-bf04-d486e27bc800"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.597234 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-scripts" (OuterVolumeSpecName: "scripts") pod "66125986-9e2e-4609-bf04-d486e27bc800" (UID: "66125986-9e2e-4609-bf04-d486e27bc800"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.597651 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66125986-9e2e-4609-bf04-d486e27bc800-kube-api-access-lmvkp" (OuterVolumeSpecName: "kube-api-access-lmvkp") pod "66125986-9e2e-4609-bf04-d486e27bc800" (UID: "66125986-9e2e-4609-bf04-d486e27bc800"). InnerVolumeSpecName "kube-api-access-lmvkp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.609238 4765 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66125986-9e2e-4609-bf04-d486e27bc800-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.609308 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmvkp\" (UniqueName: \"kubernetes.io/projected/66125986-9e2e-4609-bf04-d486e27bc800-kube-api-access-lmvkp\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.609341 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.609372 4765 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66125986-9e2e-4609-bf04-d486e27bc800-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.723455 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66125986-9e2e-4609-bf04-d486e27bc800" (UID: "66125986-9e2e-4609-bf04-d486e27bc800"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.745702 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "66125986-9e2e-4609-bf04-d486e27bc800" (UID: "66125986-9e2e-4609-bf04-d486e27bc800"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.797419 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-config-data" (OuterVolumeSpecName: "config-data") pod "66125986-9e2e-4609-bf04-d486e27bc800" (UID: "66125986-9e2e-4609-bf04-d486e27bc800"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.814779 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.814848 4765 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.814865 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66125986-9e2e-4609-bf04-d486e27bc800-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.961073 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66125986-9e2e-4609-bf04-d486e27bc800","Type":"ContainerDied","Data":"59f7a1e90459f8875ae00c8366604f549bc2ddbbc0155788f2956521294bf71c"} Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.961127 4765 scope.go:117] "RemoveContainer" containerID="8985004effba907a391e33da10b72f8b10c67a2d8cd84518df1ad973f4928d44" Jan 21 13:24:48 crc kubenswrapper[4765]: I0121 13:24:48.961357 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.009021 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.029018 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.047116 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:24:49 crc kubenswrapper[4765]: E0121 13:24:49.047719 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66125986-9e2e-4609-bf04-d486e27bc800" containerName="ceilometer-central-agent" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.047740 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="66125986-9e2e-4609-bf04-d486e27bc800" containerName="ceilometer-central-agent" Jan 21 13:24:49 crc kubenswrapper[4765]: E0121 13:24:49.047763 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66125986-9e2e-4609-bf04-d486e27bc800" containerName="ceilometer-notification-agent" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.047769 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="66125986-9e2e-4609-bf04-d486e27bc800" containerName="ceilometer-notification-agent" Jan 21 13:24:49 crc kubenswrapper[4765]: E0121 13:24:49.047790 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66125986-9e2e-4609-bf04-d486e27bc800" containerName="sg-core" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.047800 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="66125986-9e2e-4609-bf04-d486e27bc800" containerName="sg-core" Jan 21 13:24:49 crc kubenswrapper[4765]: E0121 13:24:49.047819 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66125986-9e2e-4609-bf04-d486e27bc800" containerName="proxy-httpd" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.047825 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="66125986-9e2e-4609-bf04-d486e27bc800" 
containerName="proxy-httpd" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.048017 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="66125986-9e2e-4609-bf04-d486e27bc800" containerName="ceilometer-central-agent" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.048033 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="66125986-9e2e-4609-bf04-d486e27bc800" containerName="proxy-httpd" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.048052 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="66125986-9e2e-4609-bf04-d486e27bc800" containerName="ceilometer-notification-agent" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.048063 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="66125986-9e2e-4609-bf04-d486e27bc800" containerName="sg-core" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.049866 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.053057 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.053444 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.121305 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5jb2\" (UniqueName: \"kubernetes.io/projected/c4b3f4bb-115d-4343-9ab1-a693826c14d6-kube-api-access-d5jb2\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.121352 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4b3f4bb-115d-4343-9ab1-a693826c14d6-log-httpd\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.121381 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-config-data\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.121404 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-scripts\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.121472 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4b3f4bb-115d-4343-9ab1-a693826c14d6-run-httpd\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.121529 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " 
pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.121568 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.126659 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.223179 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-scripts\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.223365 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4b3f4bb-115d-4343-9ab1-a693826c14d6-run-httpd\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.223446 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.223494 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.223567 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5jb2\" (UniqueName: \"kubernetes.io/projected/c4b3f4bb-115d-4343-9ab1-a693826c14d6-kube-api-access-d5jb2\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.223606 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4b3f4bb-115d-4343-9ab1-a693826c14d6-log-httpd\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.223643 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-config-data\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.225175 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4b3f4bb-115d-4343-9ab1-a693826c14d6-run-httpd\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.227915 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c4b3f4bb-115d-4343-9ab1-a693826c14d6-log-httpd\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.232291 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-scripts\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.232421 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.232931 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-config-data\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.234240 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.247485 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5jb2\" (UniqueName: \"kubernetes.io/projected/c4b3f4bb-115d-4343-9ab1-a693826c14d6-kube-api-access-d5jb2\") pod \"ceilometer-0\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.313867 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.374821 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:24:49 crc kubenswrapper[4765]: I0121 13:24:49.628366 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66125986-9e2e-4609-bf04-d486e27bc800" path="/var/lib/kubelet/pods/66125986-9e2e-4609-bf04-d486e27bc800/volumes" Jan 21 13:24:53 crc kubenswrapper[4765]: I0121 13:24:53.282632 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6558674dbd-lct5s" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 21 13:24:53 crc kubenswrapper[4765]: I0121 13:24:53.377501 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86c57777f6-gqpgv" podUID="1241b1f0-34c1-401a-b91f-13b72926cc2c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 21 13:24:57 crc kubenswrapper[4765]: I0121 13:24:57.021386 4765 scope.go:117] "RemoveContainer" containerID="b1d92f5f4ec9db9e8fd56da14e79ec394605ef60b83afdade54cbeaecad62a00" Jan 21 13:24:57 crc kubenswrapper[4765]: I0121 13:24:57.058847 4765 scope.go:117] "RemoveContainer" containerID="f7f34889f6657468f0e502ffae9d763d11e1d2b4329c9fb0f4a8c9515478a32e" Jan 21 13:24:57 crc kubenswrapper[4765]: I0121 13:24:57.156350 4765 scope.go:117] "RemoveContainer" containerID="20bf913cad2109f76b86fae11c0718166c0e9131982c41bd9024803a7d1f9ae6" Jan 21 13:24:57 crc kubenswrapper[4765]: I0121 13:24:57.608527 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:24:57 crc kubenswrapper[4765]: W0121 13:24:57.626429 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4b3f4bb_115d_4343_9ab1_a693826c14d6.slice/crio-2adc8d45ffca37f5a4eb6d962639231e437f9944240ef2aafc818cd617e240f7 WatchSource:0}: Error finding container 2adc8d45ffca37f5a4eb6d962639231e437f9944240ef2aafc818cd617e240f7: Status 404 returned error can't find the container with id 2adc8d45ffca37f5a4eb6d962639231e437f9944240ef2aafc818cd617e240f7 Jan 21 13:24:58 crc kubenswrapper[4765]: I0121 13:24:58.085151 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4b3f4bb-115d-4343-9ab1-a693826c14d6","Type":"ContainerStarted","Data":"2adc8d45ffca37f5a4eb6d962639231e437f9944240ef2aafc818cd617e240f7"} Jan 21 13:24:58 crc kubenswrapper[4765]: I0121 13:24:58.088560 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2ks5c" event={"ID":"3cbd47b6-cd86-4ff3-a374-4863622fefad","Type":"ContainerStarted","Data":"74ca9e66a2fe6aecacb06ccff97df34640d6ec03ef879628b10e0e3937f54f3f"} Jan 21 13:24:58 crc kubenswrapper[4765]: I0121 13:24:58.111340 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-2ks5c" podStartSLOduration=2.243435644 podStartE2EDuration="17.111321048s" podCreationTimestamp="2026-01-21 13:24:41 +0000 UTC" firstStartedPulling="2026-01-21 13:24:42.61965626 +0000 UTC m=+1343.637382082" lastFinishedPulling="2026-01-21 13:24:57.487541664 +0000 UTC m=+1358.505267486" observedRunningTime="2026-01-21 13:24:58.105427716 +0000 UTC m=+1359.123153538" watchObservedRunningTime="2026-01-21 13:24:58.111321048 +0000 UTC 
m=+1359.129046870" Jan 21 13:24:59 crc kubenswrapper[4765]: I0121 13:24:59.099388 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4b3f4bb-115d-4343-9ab1-a693826c14d6","Type":"ContainerStarted","Data":"ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45"} Jan 21 13:25:00 crc kubenswrapper[4765]: I0121 13:25:00.109899 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4b3f4bb-115d-4343-9ab1-a693826c14d6","Type":"ContainerStarted","Data":"e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7"} Jan 21 13:25:01 crc kubenswrapper[4765]: I0121 13:25:01.122945 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4b3f4bb-115d-4343-9ab1-a693826c14d6","Type":"ContainerStarted","Data":"1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418"} Jan 21 13:25:03 crc kubenswrapper[4765]: I0121 13:25:03.280793 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6558674dbd-lct5s" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 21 13:25:03 crc kubenswrapper[4765]: I0121 13:25:03.282696 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6558674dbd-lct5s" Jan 21 13:25:03 crc kubenswrapper[4765]: I0121 13:25:03.283855 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"e031dd893b547965535c1708b7e364ac4020188df01de94c0db0612a266dcb98"} pod="openstack/horizon-6558674dbd-lct5s" containerMessage="Container horizon failed startup probe, will be restarted" Jan 21 13:25:03 crc kubenswrapper[4765]: I0121 13:25:03.284006 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6558674dbd-lct5s" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" containerID="cri-o://e031dd893b547965535c1708b7e364ac4020188df01de94c0db0612a266dcb98" gracePeriod=30 Jan 21 13:25:03 crc kubenswrapper[4765]: I0121 13:25:03.376268 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86c57777f6-gqpgv" podUID="1241b1f0-34c1-401a-b91f-13b72926cc2c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 21 13:25:03 crc kubenswrapper[4765]: I0121 13:25:03.376366 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-86c57777f6-gqpgv" Jan 21 13:25:03 crc kubenswrapper[4765]: I0121 13:25:03.377427 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"78e055d27064852c7be2cdb43ad8f3d3122cb6da672a31461fcc48a6a005bc48"} pod="openstack/horizon-86c57777f6-gqpgv" containerMessage="Container horizon failed startup probe, will be restarted" Jan 21 13:25:03 crc kubenswrapper[4765]: I0121 13:25:03.377475 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-86c57777f6-gqpgv" podUID="1241b1f0-34c1-401a-b91f-13b72926cc2c" containerName="horizon" containerID="cri-o://78e055d27064852c7be2cdb43ad8f3d3122cb6da672a31461fcc48a6a005bc48" gracePeriod=30 Jan 21 13:25:04 crc kubenswrapper[4765]: 
I0121 13:25:04.153361 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4b3f4bb-115d-4343-9ab1-a693826c14d6","Type":"ContainerStarted","Data":"b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b"} Jan 21 13:25:04 crc kubenswrapper[4765]: I0121 13:25:04.154139 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 13:25:04 crc kubenswrapper[4765]: I0121 13:25:04.182036 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=9.705860986 podStartE2EDuration="15.182019932s" podCreationTimestamp="2026-01-21 13:24:49 +0000 UTC" firstStartedPulling="2026-01-21 13:24:57.62899384 +0000 UTC m=+1358.646719662" lastFinishedPulling="2026-01-21 13:25:03.105152786 +0000 UTC m=+1364.122878608" observedRunningTime="2026-01-21 13:25:04.174260592 +0000 UTC m=+1365.191986414" watchObservedRunningTime="2026-01-21 13:25:04.182019932 +0000 UTC m=+1365.199745754" Jan 21 13:25:07 crc kubenswrapper[4765]: I0121 13:25:07.032908 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:25:07 crc kubenswrapper[4765]: I0121 13:25:07.034666 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerName="ceilometer-notification-agent" containerID="cri-o://e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7" gracePeriod=30 Jan 21 13:25:07 crc kubenswrapper[4765]: I0121 13:25:07.034664 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerName="proxy-httpd" containerID="cri-o://b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b" gracePeriod=30 Jan 21 13:25:07 crc kubenswrapper[4765]: I0121 13:25:07.034598 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerName="ceilometer-central-agent" containerID="cri-o://ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45" gracePeriod=30 Jan 21 13:25:07 crc kubenswrapper[4765]: I0121 13:25:07.034683 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerName="sg-core" containerID="cri-o://1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418" gracePeriod=30 Jan 21 13:25:07 crc kubenswrapper[4765]: I0121 13:25:07.178835 4765 generic.go:334] "Generic (PLEG): container finished" podID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerID="1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418" exitCode=2 Jan 21 13:25:07 crc kubenswrapper[4765]: I0121 13:25:07.178928 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4b3f4bb-115d-4343-9ab1-a693826c14d6","Type":"ContainerDied","Data":"1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418"} Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.146592 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.210654 4765 generic.go:334] "Generic (PLEG): container finished" podID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerID="b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b" exitCode=0 Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.210686 4765 generic.go:334] "Generic (PLEG): container finished" podID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerID="e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7" exitCode=0 Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.210696 4765 generic.go:334] "Generic (PLEG): container finished" podID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerID="ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45" exitCode=0 Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.210744 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4b3f4bb-115d-4343-9ab1-a693826c14d6","Type":"ContainerDied","Data":"b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b"} Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.210776 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4b3f4bb-115d-4343-9ab1-a693826c14d6","Type":"ContainerDied","Data":"e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7"} Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.210785 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4b3f4bb-115d-4343-9ab1-a693826c14d6","Type":"ContainerDied","Data":"ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45"} Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.210795 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c4b3f4bb-115d-4343-9ab1-a693826c14d6","Type":"ContainerDied","Data":"2adc8d45ffca37f5a4eb6d962639231e437f9944240ef2aafc818cd617e240f7"} Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.210811 4765 scope.go:117] "RemoveContainer" containerID="b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.210958 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.243957 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-config-data\") pod \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.244227 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4b3f4bb-115d-4343-9ab1-a693826c14d6-run-httpd\") pod \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.244358 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-combined-ca-bundle\") pod \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.244488 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4b3f4bb-115d-4343-9ab1-a693826c14d6-log-httpd\") pod \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.244647 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-scripts\") pod \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.244784 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5jb2\" (UniqueName: \"kubernetes.io/projected/c4b3f4bb-115d-4343-9ab1-a693826c14d6-kube-api-access-d5jb2\") pod \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.244871 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-sg-core-conf-yaml\") pod \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\" (UID: \"c4b3f4bb-115d-4343-9ab1-a693826c14d6\") " Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.245365 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4b3f4bb-115d-4343-9ab1-a693826c14d6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c4b3f4bb-115d-4343-9ab1-a693826c14d6" (UID: "c4b3f4bb-115d-4343-9ab1-a693826c14d6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.246109 4765 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4b3f4bb-115d-4343-9ab1-a693826c14d6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.253601 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4b3f4bb-115d-4343-9ab1-a693826c14d6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c4b3f4bb-115d-4343-9ab1-a693826c14d6" (UID: "c4b3f4bb-115d-4343-9ab1-a693826c14d6"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.265373 4765 scope.go:117] "RemoveContainer" containerID="1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.268575 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4b3f4bb-115d-4343-9ab1-a693826c14d6-kube-api-access-d5jb2" (OuterVolumeSpecName: "kube-api-access-d5jb2") pod "c4b3f4bb-115d-4343-9ab1-a693826c14d6" (UID: "c4b3f4bb-115d-4343-9ab1-a693826c14d6"). InnerVolumeSpecName "kube-api-access-d5jb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.281439 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-scripts" (OuterVolumeSpecName: "scripts") pod "c4b3f4bb-115d-4343-9ab1-a693826c14d6" (UID: "c4b3f4bb-115d-4343-9ab1-a693826c14d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.320244 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c4b3f4bb-115d-4343-9ab1-a693826c14d6" (UID: "c4b3f4bb-115d-4343-9ab1-a693826c14d6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.347970 4765 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c4b3f4bb-115d-4343-9ab1-a693826c14d6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.348008 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.348023 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5jb2\" (UniqueName: \"kubernetes.io/projected/c4b3f4bb-115d-4343-9ab1-a693826c14d6-kube-api-access-d5jb2\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.348037 4765 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.433918 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4b3f4bb-115d-4343-9ab1-a693826c14d6" (UID: "c4b3f4bb-115d-4343-9ab1-a693826c14d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.460475 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.475788 4765 scope.go:117] "RemoveContainer" containerID="e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.511735 4765 scope.go:117] "RemoveContainer" containerID="ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.518524 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-config-data" (OuterVolumeSpecName: "config-data") pod "c4b3f4bb-115d-4343-9ab1-a693826c14d6" (UID: "c4b3f4bb-115d-4343-9ab1-a693826c14d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.541825 4765 scope.go:117] "RemoveContainer" containerID="b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b" Jan 21 13:25:08 crc kubenswrapper[4765]: E0121 13:25:08.542379 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b\": container with ID starting with b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b not found: ID does not exist" containerID="b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.542434 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b"} err="failed to get container status \"b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b\": rpc error: code = NotFound desc = could not find container \"b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b\": container with ID starting with b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b not found: ID does not exist" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.542474 4765 scope.go:117] "RemoveContainer" containerID="1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418" Jan 21 13:25:08 crc kubenswrapper[4765]: E0121 13:25:08.542982 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418\": container with ID starting with 1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418 not found: ID does not exist" containerID="1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.543069 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418"} err="failed to get container status \"1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418\": rpc error: code = NotFound desc = could not find container \"1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418\": container with ID starting with 1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418 
not found: ID does not exist" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.543122 4765 scope.go:117] "RemoveContainer" containerID="e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7" Jan 21 13:25:08 crc kubenswrapper[4765]: E0121 13:25:08.547056 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7\": container with ID starting with e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7 not found: ID does not exist" containerID="e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.547112 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7"} err="failed to get container status \"e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7\": rpc error: code = NotFound desc = could not find container \"e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7\": container with ID starting with e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7 not found: ID does not exist" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.547164 4765 scope.go:117] "RemoveContainer" containerID="ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45" Jan 21 13:25:08 crc kubenswrapper[4765]: E0121 13:25:08.547560 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45\": container with ID starting with ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45 not found: ID does not exist" containerID="ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.547593 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45"} err="failed to get container status \"ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45\": rpc error: code = NotFound desc = could not find container \"ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45\": container with ID starting with ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45 not found: ID does not exist" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.547618 4765 scope.go:117] "RemoveContainer" containerID="b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.547918 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b"} err="failed to get container status \"b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b\": rpc error: code = NotFound desc = could not find container \"b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b\": container with ID starting with b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b not found: ID does not exist" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.547973 4765 scope.go:117] "RemoveContainer" containerID="1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.548453 4765 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418"} err="failed to get container status \"1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418\": rpc error: code = NotFound desc = could not find container \"1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418\": container with ID starting with 1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418 not found: ID does not exist" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.548483 4765 scope.go:117] "RemoveContainer" containerID="e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.548753 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7"} err="failed to get container status \"e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7\": rpc error: code = NotFound desc = could not find container \"e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7\": container with ID starting with e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7 not found: ID does not exist" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.548782 4765 scope.go:117] "RemoveContainer" containerID="ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.551760 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45"} err="failed to get container status \"ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45\": rpc error: code = NotFound desc = could not find container \"ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45\": container with ID starting with ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45 not found: ID does not exist" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.551801 4765 scope.go:117] "RemoveContainer" containerID="b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.552118 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b"} err="failed to get container status \"b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b\": rpc error: code = NotFound desc = could not find container \"b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b\": container with ID starting with b1ebac48439108d4a1a76c744102b7df920c5a21305da357ed24c1e508ab869b not found: ID does not exist" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.552147 4765 scope.go:117] "RemoveContainer" containerID="1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.552630 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418"} err="failed to get container status \"1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418\": rpc error: code = NotFound desc = could not find container \"1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418\": container with ID starting with 1698e315e839dbc922ee074752eabe4a0635b60405d38f2f0dd244b6c41bb418 
not found: ID does not exist" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.552658 4765 scope.go:117] "RemoveContainer" containerID="e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.553502 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7"} err="failed to get container status \"e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7\": rpc error: code = NotFound desc = could not find container \"e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7\": container with ID starting with e021362289ec5e88e5d31fe9cfe748853d7e6dcb41821543eb86692d64f6bbf7 not found: ID does not exist" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.553531 4765 scope.go:117] "RemoveContainer" containerID="ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.553837 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45"} err="failed to get container status \"ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45\": rpc error: code = NotFound desc = could not find container \"ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45\": container with ID starting with ef21bd926523d860872c6db52144f17de7a98897f90188d0f379d617b7507c45 not found: ID does not exist" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.562437 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b3f4bb-115d-4343-9ab1-a693826c14d6-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.917851 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.947346 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.962376 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:25:08 crc kubenswrapper[4765]: E0121 13:25:08.962930 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerName="sg-core" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.962960 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerName="sg-core" Jan 21 13:25:08 crc kubenswrapper[4765]: E0121 13:25:08.962995 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerName="proxy-httpd" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.963004 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerName="proxy-httpd" Jan 21 13:25:08 crc kubenswrapper[4765]: E0121 13:25:08.963015 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerName="ceilometer-notification-agent" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.963023 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerName="ceilometer-notification-agent" Jan 21 13:25:08 crc kubenswrapper[4765]: E0121 13:25:08.963047 4765 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerName="ceilometer-central-agent" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.963056 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerName="ceilometer-central-agent" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.963334 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerName="proxy-httpd" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.963365 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerName="ceilometer-central-agent" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.963374 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerName="ceilometer-notification-agent" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.963393 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" containerName="sg-core" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.966091 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.970475 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.973598 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 13:25:08 crc kubenswrapper[4765]: I0121 13:25:08.979343 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.083269 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0f3546f-0a2b-4529-b685-8674eb662a8b-run-httpd\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.084102 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0f3546f-0a2b-4529-b685-8674eb662a8b-log-httpd\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.084255 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-scripts\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.084412 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x28md\" (UniqueName: \"kubernetes.io/projected/d0f3546f-0a2b-4529-b685-8674eb662a8b-kube-api-access-x28md\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.084632 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.085110 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.085362 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-config-data\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.186047 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-config-data\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.186629 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0f3546f-0a2b-4529-b685-8674eb662a8b-run-httpd\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.186740 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0f3546f-0a2b-4529-b685-8674eb662a8b-log-httpd\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.186815 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-scripts\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.186927 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x28md\" (UniqueName: \"kubernetes.io/projected/d0f3546f-0a2b-4529-b685-8674eb662a8b-kube-api-access-x28md\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.187076 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.187169 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.187704 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/d0f3546f-0a2b-4529-b685-8674eb662a8b-run-httpd\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.187740 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0f3546f-0a2b-4529-b685-8674eb662a8b-log-httpd\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.191539 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-scripts\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.191864 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.192646 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.196043 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-config-data\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.208376 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x28md\" (UniqueName: \"kubernetes.io/projected/d0f3546f-0a2b-4529-b685-8674eb662a8b-kube-api-access-x28md\") pod \"ceilometer-0\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.296935 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.629874 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4b3f4bb-115d-4343-9ab1-a693826c14d6" path="/var/lib/kubelet/pods/c4b3f4bb-115d-4343-9ab1-a693826c14d6/volumes" Jan 21 13:25:09 crc kubenswrapper[4765]: I0121 13:25:09.818895 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:25:10 crc kubenswrapper[4765]: I0121 13:25:10.232982 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0f3546f-0a2b-4529-b685-8674eb662a8b","Type":"ContainerStarted","Data":"b9576de088c2cf26dfa6531bf2219ab75d1d1a81a2d1196ff40eabdc46a5daf6"} Jan 21 13:25:11 crc kubenswrapper[4765]: I0121 13:25:11.244672 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0f3546f-0a2b-4529-b685-8674eb662a8b","Type":"ContainerStarted","Data":"29245bc8840badecf93e9f0b20ca87b361bedc52a01b76e4b6eea0818764966d"} Jan 21 13:25:12 crc kubenswrapper[4765]: I0121 13:25:12.258488 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0f3546f-0a2b-4529-b685-8674eb662a8b","Type":"ContainerStarted","Data":"59afb391cdacd27b9f7a2666cb850805cb893ba3455da77828c89b984f57d6e4"} Jan 21 13:25:13 crc kubenswrapper[4765]: I0121 13:25:13.269480 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0f3546f-0a2b-4529-b685-8674eb662a8b","Type":"ContainerStarted","Data":"b3d2fe8470a06b0e221cc2677ca2f89ef5fd7561ed66b7b1c81e02af50d85a12"} Jan 21 13:25:14 crc kubenswrapper[4765]: I0121 13:25:14.300525 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0f3546f-0a2b-4529-b685-8674eb662a8b","Type":"ContainerStarted","Data":"cd8977f9ca18e3b957bce3f63e6542809c46e517bf63610e96d17ec5b0b5ff9e"} Jan 21 13:25:14 crc kubenswrapper[4765]: I0121 13:25:14.302339 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 13:25:14 crc kubenswrapper[4765]: I0121 13:25:14.355092 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.468956992 podStartE2EDuration="6.355073136s" podCreationTimestamp="2026-01-21 13:25:08 +0000 UTC" firstStartedPulling="2026-01-21 13:25:09.823919534 +0000 UTC m=+1370.841645356" lastFinishedPulling="2026-01-21 13:25:13.710035678 +0000 UTC m=+1374.727761500" observedRunningTime="2026-01-21 13:25:14.342588704 +0000 UTC m=+1375.360314526" watchObservedRunningTime="2026-01-21 13:25:14.355073136 +0000 UTC m=+1375.372798958" Jan 21 13:25:17 crc kubenswrapper[4765]: I0121 13:25:17.332374 4765 generic.go:334] "Generic (PLEG): container finished" podID="3cbd47b6-cd86-4ff3-a374-4863622fefad" containerID="74ca9e66a2fe6aecacb06ccff97df34640d6ec03ef879628b10e0e3937f54f3f" exitCode=0 Jan 21 13:25:17 crc kubenswrapper[4765]: I0121 13:25:17.332564 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2ks5c" event={"ID":"3cbd47b6-cd86-4ff3-a374-4863622fefad","Type":"ContainerDied","Data":"74ca9e66a2fe6aecacb06ccff97df34640d6ec03ef879628b10e0e3937f54f3f"} Jan 21 13:25:18 crc kubenswrapper[4765]: I0121 13:25:18.711824 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2ks5c" Jan 21 13:25:18 crc kubenswrapper[4765]: I0121 13:25:18.767362 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbd47b6-cd86-4ff3-a374-4863622fefad-combined-ca-bundle\") pod \"3cbd47b6-cd86-4ff3-a374-4863622fefad\" (UID: \"3cbd47b6-cd86-4ff3-a374-4863622fefad\") " Jan 21 13:25:18 crc kubenswrapper[4765]: I0121 13:25:18.767416 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdpgw\" (UniqueName: \"kubernetes.io/projected/3cbd47b6-cd86-4ff3-a374-4863622fefad-kube-api-access-wdpgw\") pod \"3cbd47b6-cd86-4ff3-a374-4863622fefad\" (UID: \"3cbd47b6-cd86-4ff3-a374-4863622fefad\") " Jan 21 13:25:18 crc kubenswrapper[4765]: I0121 13:25:18.767487 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cbd47b6-cd86-4ff3-a374-4863622fefad-config-data\") pod \"3cbd47b6-cd86-4ff3-a374-4863622fefad\" (UID: \"3cbd47b6-cd86-4ff3-a374-4863622fefad\") " Jan 21 13:25:18 crc kubenswrapper[4765]: I0121 13:25:18.767523 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cbd47b6-cd86-4ff3-a374-4863622fefad-scripts\") pod \"3cbd47b6-cd86-4ff3-a374-4863622fefad\" (UID: \"3cbd47b6-cd86-4ff3-a374-4863622fefad\") " Jan 21 13:25:18 crc kubenswrapper[4765]: I0121 13:25:18.773922 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cbd47b6-cd86-4ff3-a374-4863622fefad-scripts" (OuterVolumeSpecName: "scripts") pod "3cbd47b6-cd86-4ff3-a374-4863622fefad" (UID: "3cbd47b6-cd86-4ff3-a374-4863622fefad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:18 crc kubenswrapper[4765]: I0121 13:25:18.777498 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cbd47b6-cd86-4ff3-a374-4863622fefad-kube-api-access-wdpgw" (OuterVolumeSpecName: "kube-api-access-wdpgw") pod "3cbd47b6-cd86-4ff3-a374-4863622fefad" (UID: "3cbd47b6-cd86-4ff3-a374-4863622fefad"). InnerVolumeSpecName "kube-api-access-wdpgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:25:18 crc kubenswrapper[4765]: I0121 13:25:18.815038 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cbd47b6-cd86-4ff3-a374-4863622fefad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cbd47b6-cd86-4ff3-a374-4863622fefad" (UID: "3cbd47b6-cd86-4ff3-a374-4863622fefad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:18 crc kubenswrapper[4765]: I0121 13:25:18.826630 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cbd47b6-cd86-4ff3-a374-4863622fefad-config-data" (OuterVolumeSpecName: "config-data") pod "3cbd47b6-cd86-4ff3-a374-4863622fefad" (UID: "3cbd47b6-cd86-4ff3-a374-4863622fefad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:18 crc kubenswrapper[4765]: I0121 13:25:18.870382 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cbd47b6-cd86-4ff3-a374-4863622fefad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:18 crc kubenswrapper[4765]: I0121 13:25:18.870424 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdpgw\" (UniqueName: \"kubernetes.io/projected/3cbd47b6-cd86-4ff3-a374-4863622fefad-kube-api-access-wdpgw\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:18 crc kubenswrapper[4765]: I0121 13:25:18.870443 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cbd47b6-cd86-4ff3-a374-4863622fefad-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:18 crc kubenswrapper[4765]: I0121 13:25:18.870453 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cbd47b6-cd86-4ff3-a374-4863622fefad-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:19 crc kubenswrapper[4765]: I0121 13:25:19.365601 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2ks5c" event={"ID":"3cbd47b6-cd86-4ff3-a374-4863622fefad","Type":"ContainerDied","Data":"2a8b86309b5a97f415e2947562e19ef948a3c1c78a607bf2d7fe2ac9d767d7d9"} Jan 21 13:25:19 crc kubenswrapper[4765]: I0121 13:25:19.365646 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a8b86309b5a97f415e2947562e19ef948a3c1c78a607bf2d7fe2ac9d767d7d9" Jan 21 13:25:19 crc kubenswrapper[4765]: I0121 13:25:19.365711 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2ks5c" Jan 21 13:25:20 crc kubenswrapper[4765]: I0121 13:25:20.408267 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 13:25:20 crc kubenswrapper[4765]: E0121 13:25:20.409129 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cbd47b6-cd86-4ff3-a374-4863622fefad" containerName="nova-cell0-conductor-db-sync" Jan 21 13:25:20 crc kubenswrapper[4765]: I0121 13:25:20.409148 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cbd47b6-cd86-4ff3-a374-4863622fefad" containerName="nova-cell0-conductor-db-sync" Jan 21 13:25:20 crc kubenswrapper[4765]: I0121 13:25:20.409411 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cbd47b6-cd86-4ff3-a374-4863622fefad" containerName="nova-cell0-conductor-db-sync" Jan 21 13:25:20 crc kubenswrapper[4765]: I0121 13:25:20.410238 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 13:25:20 crc kubenswrapper[4765]: I0121 13:25:20.414869 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 13:25:20 crc kubenswrapper[4765]: I0121 13:25:20.414999 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zlzpq" Jan 21 13:25:20 crc kubenswrapper[4765]: I0121 13:25:20.422460 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 13:25:20 crc kubenswrapper[4765]: I0121 13:25:20.601319 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79930bf0-36ee-4f2e-8530-0bcdf3c9d998-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"79930bf0-36ee-4f2e-8530-0bcdf3c9d998\") " pod="openstack/nova-cell0-conductor-0" Jan 21 13:25:20 crc kubenswrapper[4765]: I0121 13:25:20.601411 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p2q5\" (UniqueName: \"kubernetes.io/projected/79930bf0-36ee-4f2e-8530-0bcdf3c9d998-kube-api-access-9p2q5\") pod \"nova-cell0-conductor-0\" (UID: \"79930bf0-36ee-4f2e-8530-0bcdf3c9d998\") " pod="openstack/nova-cell0-conductor-0" Jan 21 13:25:20 crc kubenswrapper[4765]: I0121 13:25:20.601466 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79930bf0-36ee-4f2e-8530-0bcdf3c9d998-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"79930bf0-36ee-4f2e-8530-0bcdf3c9d998\") " pod="openstack/nova-cell0-conductor-0" Jan 21 13:25:20 crc kubenswrapper[4765]: I0121 13:25:20.703400 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79930bf0-36ee-4f2e-8530-0bcdf3c9d998-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"79930bf0-36ee-4f2e-8530-0bcdf3c9d998\") " pod="openstack/nova-cell0-conductor-0" Jan 21 13:25:20 crc kubenswrapper[4765]: I0121 13:25:20.703500 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p2q5\" (UniqueName: \"kubernetes.io/projected/79930bf0-36ee-4f2e-8530-0bcdf3c9d998-kube-api-access-9p2q5\") pod \"nova-cell0-conductor-0\" (UID: \"79930bf0-36ee-4f2e-8530-0bcdf3c9d998\") " pod="openstack/nova-cell0-conductor-0" Jan 21 13:25:20 crc kubenswrapper[4765]: I0121 13:25:20.703563 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79930bf0-36ee-4f2e-8530-0bcdf3c9d998-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"79930bf0-36ee-4f2e-8530-0bcdf3c9d998\") " pod="openstack/nova-cell0-conductor-0" Jan 21 13:25:20 crc kubenswrapper[4765]: I0121 13:25:20.721425 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79930bf0-36ee-4f2e-8530-0bcdf3c9d998-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"79930bf0-36ee-4f2e-8530-0bcdf3c9d998\") " pod="openstack/nova-cell0-conductor-0" Jan 21 13:25:20 crc kubenswrapper[4765]: I0121 13:25:20.727800 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79930bf0-36ee-4f2e-8530-0bcdf3c9d998-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"79930bf0-36ee-4f2e-8530-0bcdf3c9d998\") " pod="openstack/nova-cell0-conductor-0" Jan 21 13:25:20 crc kubenswrapper[4765]: I0121 13:25:20.731553 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p2q5\" (UniqueName: \"kubernetes.io/projected/79930bf0-36ee-4f2e-8530-0bcdf3c9d998-kube-api-access-9p2q5\") pod \"nova-cell0-conductor-0\" (UID: \"79930bf0-36ee-4f2e-8530-0bcdf3c9d998\") " pod="openstack/nova-cell0-conductor-0" Jan 21 13:25:20 crc kubenswrapper[4765]: I0121 13:25:20.750613 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 13:25:21 crc kubenswrapper[4765]: I0121 13:25:21.274416 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 13:25:21 crc kubenswrapper[4765]: I0121 13:25:21.385372 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"79930bf0-36ee-4f2e-8530-0bcdf3c9d998","Type":"ContainerStarted","Data":"f383499a19625631e487e5d074974a5b553a6fc55bb607d52f73df2ee642dc31"} Jan 21 13:25:22 crc kubenswrapper[4765]: I0121 13:25:22.394065 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"79930bf0-36ee-4f2e-8530-0bcdf3c9d998","Type":"ContainerStarted","Data":"7664f5c3a309b8528573b3433a49f9b48190c62b3f35cc01194b547684ad36f6"} Jan 21 13:25:22 crc kubenswrapper[4765]: I0121 13:25:22.395757 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 21 13:25:22 crc kubenswrapper[4765]: I0121 13:25:22.425038 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.425014544 podStartE2EDuration="2.425014544s" podCreationTimestamp="2026-01-21 13:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:25:22.420705593 +0000 UTC m=+1383.438431425" watchObservedRunningTime="2026-01-21 13:25:22.425014544 +0000 UTC m=+1383.442740366" Jan 21 13:25:30 crc kubenswrapper[4765]: I0121 13:25:30.777866 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.259730 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-j6x8m"] Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.261450 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-j6x8m" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.268092 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.268092 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.279984 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-j6x8m"] Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.481195 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-config-data\") pod \"nova-cell0-cell-mapping-j6x8m\" (UID: \"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\") " pod="openstack/nova-cell0-cell-mapping-j6x8m" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.481256 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jzs8\" (UniqueName: \"kubernetes.io/projected/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-kube-api-access-2jzs8\") pod \"nova-cell0-cell-mapping-j6x8m\" (UID: \"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\") " pod="openstack/nova-cell0-cell-mapping-j6x8m" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.481375 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-j6x8m\" (UID: \"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\") " pod="openstack/nova-cell0-cell-mapping-j6x8m" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.481427 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-scripts\") pod \"nova-cell0-cell-mapping-j6x8m\" (UID: \"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\") " pod="openstack/nova-cell0-cell-mapping-j6x8m" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.570298 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.572382 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.577034 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.585561 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-j6x8m\" (UID: \"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\") " pod="openstack/nova-cell0-cell-mapping-j6x8m" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.585829 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\") " pod="openstack/nova-api-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.585932 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tv9h\" (UniqueName: \"kubernetes.io/projected/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-kube-api-access-6tv9h\") pod \"nova-api-0\" (UID: \"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\") " pod="openstack/nova-api-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.586043 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-scripts\") pod \"nova-cell0-cell-mapping-j6x8m\" (UID: \"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\") " pod="openstack/nova-cell0-cell-mapping-j6x8m" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.586133 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-logs\") pod \"nova-api-0\" (UID: \"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\") " pod="openstack/nova-api-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.586237 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-config-data\") pod \"nova-cell0-cell-mapping-j6x8m\" (UID: \"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\") " pod="openstack/nova-cell0-cell-mapping-j6x8m" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.586305 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jzs8\" (UniqueName: \"kubernetes.io/projected/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-kube-api-access-2jzs8\") pod \"nova-cell0-cell-mapping-j6x8m\" (UID: \"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\") " pod="openstack/nova-cell0-cell-mapping-j6x8m" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.586384 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-config-data\") pod \"nova-api-0\" (UID: \"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\") " pod="openstack/nova-api-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.595557 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-scripts\") pod \"nova-cell0-cell-mapping-j6x8m\" (UID: 
\"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\") " pod="openstack/nova-cell0-cell-mapping-j6x8m" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.598310 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-config-data\") pod \"nova-cell0-cell-mapping-j6x8m\" (UID: \"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\") " pod="openstack/nova-cell0-cell-mapping-j6x8m" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.608481 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.614370 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-j6x8m\" (UID: \"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\") " pod="openstack/nova-cell0-cell-mapping-j6x8m" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.663070 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.664463 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.671269 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.682418 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.701126 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jzs8\" (UniqueName: \"kubernetes.io/projected/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-kube-api-access-2jzs8\") pod \"nova-cell0-cell-mapping-j6x8m\" (UID: \"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\") " pod="openstack/nova-cell0-cell-mapping-j6x8m" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.701877 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89aaf065-f456-4efe-bfdd-dafb090e4149-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"89aaf065-f456-4efe-bfdd-dafb090e4149\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.702055 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twps2\" (UniqueName: \"kubernetes.io/projected/89aaf065-f456-4efe-bfdd-dafb090e4149-kube-api-access-twps2\") pod \"nova-cell1-novncproxy-0\" (UID: \"89aaf065-f456-4efe-bfdd-dafb090e4149\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.702310 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\") " pod="openstack/nova-api-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.702386 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tv9h\" (UniqueName: \"kubernetes.io/projected/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-kube-api-access-6tv9h\") pod \"nova-api-0\" (UID: 
\"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\") " pod="openstack/nova-api-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.702484 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89aaf065-f456-4efe-bfdd-dafb090e4149-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"89aaf065-f456-4efe-bfdd-dafb090e4149\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.702517 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-logs\") pod \"nova-api-0\" (UID: \"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\") " pod="openstack/nova-api-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.702569 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-config-data\") pod \"nova-api-0\" (UID: \"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\") " pod="openstack/nova-api-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.709319 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-logs\") pod \"nova-api-0\" (UID: \"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\") " pod="openstack/nova-api-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.717074 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-config-data\") pod \"nova-api-0\" (UID: \"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\") " pod="openstack/nova-api-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.717855 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\") " pod="openstack/nova-api-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.772247 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tv9h\" (UniqueName: \"kubernetes.io/projected/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-kube-api-access-6tv9h\") pod \"nova-api-0\" (UID: \"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\") " pod="openstack/nova-api-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.778414 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.787831 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.818273 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.819642 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89aaf065-f456-4efe-bfdd-dafb090e4149-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"89aaf065-f456-4efe-bfdd-dafb090e4149\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.819699 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d97d96-4a2c-4db2-bacc-acda550ebd59-config-data\") pod \"nova-metadata-0\" (UID: \"24d97d96-4a2c-4db2-bacc-acda550ebd59\") " pod="openstack/nova-metadata-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.819736 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twps2\" (UniqueName: \"kubernetes.io/projected/89aaf065-f456-4efe-bfdd-dafb090e4149-kube-api-access-twps2\") pod \"nova-cell1-novncproxy-0\" (UID: \"89aaf065-f456-4efe-bfdd-dafb090e4149\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.819783 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9njrc\" (UniqueName: \"kubernetes.io/projected/24d97d96-4a2c-4db2-bacc-acda550ebd59-kube-api-access-9njrc\") pod \"nova-metadata-0\" (UID: \"24d97d96-4a2c-4db2-bacc-acda550ebd59\") " pod="openstack/nova-metadata-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.819815 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d97d96-4a2c-4db2-bacc-acda550ebd59-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"24d97d96-4a2c-4db2-bacc-acda550ebd59\") " pod="openstack/nova-metadata-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.819872 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89aaf065-f456-4efe-bfdd-dafb090e4149-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"89aaf065-f456-4efe-bfdd-dafb090e4149\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.819897 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d97d96-4a2c-4db2-bacc-acda550ebd59-logs\") pod \"nova-metadata-0\" (UID: \"24d97d96-4a2c-4db2-bacc-acda550ebd59\") " pod="openstack/nova-metadata-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.839627 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.840978 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.847952 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89aaf065-f456-4efe-bfdd-dafb090e4149-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"89aaf065-f456-4efe-bfdd-dafb090e4149\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.852638 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89aaf065-f456-4efe-bfdd-dafb090e4149-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"89aaf065-f456-4efe-bfdd-dafb090e4149\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.853705 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.873047 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.874196 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twps2\" (UniqueName: \"kubernetes.io/projected/89aaf065-f456-4efe-bfdd-dafb090e4149-kube-api-access-twps2\") pod \"nova-cell1-novncproxy-0\" (UID: \"89aaf065-f456-4efe-bfdd-dafb090e4149\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.891571 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.948166 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9-config-data\") pod \"nova-scheduler-0\" (UID: \"fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9\") " pod="openstack/nova-scheduler-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.948323 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d97d96-4a2c-4db2-bacc-acda550ebd59-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"24d97d96-4a2c-4db2-bacc-acda550ebd59\") " pod="openstack/nova-metadata-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.948420 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvhqd\" (UniqueName: \"kubernetes.io/projected/fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9-kube-api-access-dvhqd\") pod \"nova-scheduler-0\" (UID: \"fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9\") " pod="openstack/nova-scheduler-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.948512 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9\") " pod="openstack/nova-scheduler-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.948844 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d97d96-4a2c-4db2-bacc-acda550ebd59-logs\") pod \"nova-metadata-0\" (UID: \"24d97d96-4a2c-4db2-bacc-acda550ebd59\") " pod="openstack/nova-metadata-0" Jan 21 
13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.949129 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d97d96-4a2c-4db2-bacc-acda550ebd59-config-data\") pod \"nova-metadata-0\" (UID: \"24d97d96-4a2c-4db2-bacc-acda550ebd59\") " pod="openstack/nova-metadata-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.949368 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9njrc\" (UniqueName: \"kubernetes.io/projected/24d97d96-4a2c-4db2-bacc-acda550ebd59-kube-api-access-9njrc\") pod \"nova-metadata-0\" (UID: \"24d97d96-4a2c-4db2-bacc-acda550ebd59\") " pod="openstack/nova-metadata-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.950872 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d97d96-4a2c-4db2-bacc-acda550ebd59-logs\") pod \"nova-metadata-0\" (UID: \"24d97d96-4a2c-4db2-bacc-acda550ebd59\") " pod="openstack/nova-metadata-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.960994 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:25:31 crc kubenswrapper[4765]: I0121 13:25:31.982714 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d97d96-4a2c-4db2-bacc-acda550ebd59-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"24d97d96-4a2c-4db2-bacc-acda550ebd59\") " pod="openstack/nova-metadata-0" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.013777 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-j6x8m" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.017297 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d97d96-4a2c-4db2-bacc-acda550ebd59-config-data\") pod \"nova-metadata-0\" (UID: \"24d97d96-4a2c-4db2-bacc-acda550ebd59\") " pod="openstack/nova-metadata-0" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.028549 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9njrc\" (UniqueName: \"kubernetes.io/projected/24d97d96-4a2c-4db2-bacc-acda550ebd59-kube-api-access-9njrc\") pod \"nova-metadata-0\" (UID: \"24d97d96-4a2c-4db2-bacc-acda550ebd59\") " pod="openstack/nova-metadata-0" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.037337 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.053566 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9-config-data\") pod \"nova-scheduler-0\" (UID: \"fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9\") " pod="openstack/nova-scheduler-0" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.053629 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvhqd\" (UniqueName: \"kubernetes.io/projected/fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9-kube-api-access-dvhqd\") pod \"nova-scheduler-0\" (UID: \"fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9\") " pod="openstack/nova-scheduler-0" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.053739 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9\") " pod="openstack/nova-scheduler-0" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.059018 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9\") " pod="openstack/nova-scheduler-0" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.066536 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9-config-data\") pod \"nova-scheduler-0\" (UID: \"fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9\") " pod="openstack/nova-scheduler-0" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.082291 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-xq8nf"] Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.083941 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.092972 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvhqd\" (UniqueName: \"kubernetes.io/projected/fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9-kube-api-access-dvhqd\") pod \"nova-scheduler-0\" (UID: \"fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9\") " pod="openstack/nova-scheduler-0" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.094679 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-xq8nf"] Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.158663 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lxqh\" (UniqueName: \"kubernetes.io/projected/995d0c57-db1c-4e45-a405-cc87dc9094da-kube-api-access-6lxqh\") pod \"dnsmasq-dns-bccf8f775-xq8nf\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") " pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.158777 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-xq8nf\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") " pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.158834 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-dns-svc\") pod \"dnsmasq-dns-bccf8f775-xq8nf\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") " pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.158912 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-xq8nf\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") " pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.158962 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-xq8nf\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") " pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.159070 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-config\") pod \"dnsmasq-dns-bccf8f775-xq8nf\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") " pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.265155 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lxqh\" (UniqueName: \"kubernetes.io/projected/995d0c57-db1c-4e45-a405-cc87dc9094da-kube-api-access-6lxqh\") pod \"dnsmasq-dns-bccf8f775-xq8nf\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") " pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.265325 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-xq8nf\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") " pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.265401 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-dns-svc\") pod \"dnsmasq-dns-bccf8f775-xq8nf\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") " pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.265449 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-xq8nf\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") " pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.265481 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-xq8nf\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") " pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.265597 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-config\") pod \"dnsmasq-dns-bccf8f775-xq8nf\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") " pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.266629 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-config\") pod \"dnsmasq-dns-bccf8f775-xq8nf\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") " pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.267354 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-xq8nf\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") " pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.267860 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-xq8nf\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") " pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.268271 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-dns-svc\") pod \"dnsmasq-dns-bccf8f775-xq8nf\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") " pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.272199 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-xq8nf\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") " pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.299012 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lxqh\" (UniqueName: \"kubernetes.io/projected/995d0c57-db1c-4e45-a405-cc87dc9094da-kube-api-access-6lxqh\") pod \"dnsmasq-dns-bccf8f775-xq8nf\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") " pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.324392 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.332247 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.450022 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:32 crc kubenswrapper[4765]: W0121 13:25:32.756478 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89aaf065_f456_4efe_bfdd_dafb090e4149.slice/crio-dbcc81ebf496f1347d9ae588363e9f163c7fd1299bfaea87ab007ca9eb5de01f WatchSource:0}: Error finding container dbcc81ebf496f1347d9ae588363e9f163c7fd1299bfaea87ab007ca9eb5de01f: Status 404 returned error can't find the container with id dbcc81ebf496f1347d9ae588363e9f163c7fd1299bfaea87ab007ca9eb5de01f Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.772509 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 13:25:32 crc kubenswrapper[4765]: W0121 13:25:32.780584 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dc89bc6_a242_4876_bf76_4d93cbc8d55d.slice/crio-c218acdbb548c3b34c4a0099610289d4ef01a5a0f2e5cf8e4e19becbfa7b2c7b WatchSource:0}: Error finding container c218acdbb548c3b34c4a0099610289d4ef01a5a0f2e5cf8e4e19becbfa7b2c7b: Status 404 returned error can't find the container with id c218acdbb548c3b34c4a0099610289d4ef01a5a0f2e5cf8e4e19becbfa7b2c7b Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.802852 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-j6x8m"] Jan 21 13:25:32 crc kubenswrapper[4765]: I0121 13:25:32.852473 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.059995 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.222283 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:25:33 crc kubenswrapper[4765]: W0121 13:25:33.223586 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdd8ae5d_5ba6_44b2_9af7_a46fe7515dc9.slice/crio-55f6238b77e0e1d9faec2c48c58343f205bf3e745176687703ae0214c86f4a7d WatchSource:0}: Error finding container 55f6238b77e0e1d9faec2c48c58343f205bf3e745176687703ae0214c86f4a7d: Status 404 returned error can't find the container with id 55f6238b77e0e1d9faec2c48c58343f205bf3e745176687703ae0214c86f4a7d Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.298312 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-xq8nf"] Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.429628 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fd76t"] Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.431665 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fd76t" Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.437382 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.437753 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.457032 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fd76t"] Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.500541 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ded68b6f-882a-4df2-afc6-760c969f9724-scripts\") pod \"nova-cell1-conductor-db-sync-fd76t\" (UID: \"ded68b6f-882a-4df2-afc6-760c969f9724\") " pod="openstack/nova-cell1-conductor-db-sync-fd76t" Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.500642 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brs7b\" (UniqueName: \"kubernetes.io/projected/ded68b6f-882a-4df2-afc6-760c969f9724-kube-api-access-brs7b\") pod \"nova-cell1-conductor-db-sync-fd76t\" (UID: \"ded68b6f-882a-4df2-afc6-760c969f9724\") " pod="openstack/nova-cell1-conductor-db-sync-fd76t" Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.500691 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded68b6f-882a-4df2-afc6-760c969f9724-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fd76t\" (UID: \"ded68b6f-882a-4df2-afc6-760c969f9724\") " pod="openstack/nova-cell1-conductor-db-sync-fd76t" Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.500711 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded68b6f-882a-4df2-afc6-760c969f9724-config-data\") pod \"nova-cell1-conductor-db-sync-fd76t\" (UID: \"ded68b6f-882a-4df2-afc6-760c969f9724\") " pod="openstack/nova-cell1-conductor-db-sync-fd76t" Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.555403 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-j6x8m" event={"ID":"2dc89bc6-a242-4876-bf76-4d93cbc8d55d","Type":"ContainerStarted","Data":"174f571d566dc44d071eb42dcfca229e44df5df09a299b3017dce551d67830a7"} Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.555473 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-j6x8m" event={"ID":"2dc89bc6-a242-4876-bf76-4d93cbc8d55d","Type":"ContainerStarted","Data":"c218acdbb548c3b34c4a0099610289d4ef01a5a0f2e5cf8e4e19becbfa7b2c7b"} Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.584692 4765 generic.go:334] "Generic (PLEG): container finished" podID="074ae613-bc7f-4443-abdb-7010b6054997" containerID="e031dd893b547965535c1708b7e364ac4020188df01de94c0db0612a266dcb98" exitCode=137 Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.584898 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6558674dbd-lct5s" event={"ID":"074ae613-bc7f-4443-abdb-7010b6054997","Type":"ContainerDied","Data":"e031dd893b547965535c1708b7e364ac4020188df01de94c0db0612a266dcb98"} Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.584971 4765 scope.go:117] 
"RemoveContainer" containerID="aa436e74a6fd1c1c3a4ed7348015c8f931d8a51210c3f7b94c4c01885524ce52" Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.585263 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-j6x8m" podStartSLOduration=2.58525379 podStartE2EDuration="2.58525379s" podCreationTimestamp="2026-01-21 13:25:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:25:33.583803452 +0000 UTC m=+1394.601529274" watchObservedRunningTime="2026-01-21 13:25:33.58525379 +0000 UTC m=+1394.602979612" Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.594139 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"89aaf065-f456-4efe-bfdd-dafb090e4149","Type":"ContainerStarted","Data":"dbcc81ebf496f1347d9ae588363e9f163c7fd1299bfaea87ab007ca9eb5de01f"} Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.597816 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24d97d96-4a2c-4db2-bacc-acda550ebd59","Type":"ContainerStarted","Data":"12d245003fa9070dd5dcffb000250e01ea3df141df1b0dde517754a251c4c929"} Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.602663 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brs7b\" (UniqueName: \"kubernetes.io/projected/ded68b6f-882a-4df2-afc6-760c969f9724-kube-api-access-brs7b\") pod \"nova-cell1-conductor-db-sync-fd76t\" (UID: \"ded68b6f-882a-4df2-afc6-760c969f9724\") " pod="openstack/nova-cell1-conductor-db-sync-fd76t" Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.602757 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded68b6f-882a-4df2-afc6-760c969f9724-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fd76t\" (UID: \"ded68b6f-882a-4df2-afc6-760c969f9724\") " pod="openstack/nova-cell1-conductor-db-sync-fd76t" Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.602787 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded68b6f-882a-4df2-afc6-760c969f9724-config-data\") pod \"nova-cell1-conductor-db-sync-fd76t\" (UID: \"ded68b6f-882a-4df2-afc6-760c969f9724\") " pod="openstack/nova-cell1-conductor-db-sync-fd76t" Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.602951 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ded68b6f-882a-4df2-afc6-760c969f9724-scripts\") pod \"nova-cell1-conductor-db-sync-fd76t\" (UID: \"ded68b6f-882a-4df2-afc6-760c969f9724\") " pod="openstack/nova-cell1-conductor-db-sync-fd76t" Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.610953 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded68b6f-882a-4df2-afc6-760c969f9724-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fd76t\" (UID: \"ded68b6f-882a-4df2-afc6-760c969f9724\") " pod="openstack/nova-cell1-conductor-db-sync-fd76t" Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.611527 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9","Type":"ContainerStarted","Data":"55f6238b77e0e1d9faec2c48c58343f205bf3e745176687703ae0214c86f4a7d"} Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.630063 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brs7b\" (UniqueName: \"kubernetes.io/projected/ded68b6f-882a-4df2-afc6-760c969f9724-kube-api-access-brs7b\") pod \"nova-cell1-conductor-db-sync-fd76t\" (UID: \"ded68b6f-882a-4df2-afc6-760c969f9724\") " pod="openstack/nova-cell1-conductor-db-sync-fd76t" Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.630369 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded68b6f-882a-4df2-afc6-760c969f9724-config-data\") pod \"nova-cell1-conductor-db-sync-fd76t\" (UID: \"ded68b6f-882a-4df2-afc6-760c969f9724\") " pod="openstack/nova-cell1-conductor-db-sync-fd76t" Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.630785 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ded68b6f-882a-4df2-afc6-760c969f9724-scripts\") pod \"nova-cell1-conductor-db-sync-fd76t\" (UID: \"ded68b6f-882a-4df2-afc6-760c969f9724\") " pod="openstack/nova-cell1-conductor-db-sync-fd76t" Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.638615 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dfbf6137-ef01-4260-8dca-c82ce4c55dd7","Type":"ContainerStarted","Data":"9ccafa3e58279f85da2abe0b4bbc612c99aef17c58b64183a1d6d681593446e5"} Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.638670 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" event={"ID":"995d0c57-db1c-4e45-a405-cc87dc9094da","Type":"ContainerStarted","Data":"d5e003f9c21d4db8bb982e6fe32752f53ca3b782854083675c9a878af502b529"} Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.798137 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fd76t" Jan 21 13:25:33 crc kubenswrapper[4765]: I0121 13:25:33.810807 4765 scope.go:117] "RemoveContainer" containerID="525e7c833be0c9aaecaa3bb143bb8b3e85d3f4f3f3a988497d9a304c34f0453f" Jan 21 13:25:34 crc kubenswrapper[4765]: I0121 13:25:34.415290 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fd76t"] Jan 21 13:25:34 crc kubenswrapper[4765]: I0121 13:25:34.690113 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fd76t" event={"ID":"ded68b6f-882a-4df2-afc6-760c969f9724","Type":"ContainerStarted","Data":"ab75d0dec5db883d4af76b6942e6dcfa178103ddf1259045c5d94a959d8fca9d"} Jan 21 13:25:34 crc kubenswrapper[4765]: I0121 13:25:34.720720 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6558674dbd-lct5s" event={"ID":"074ae613-bc7f-4443-abdb-7010b6054997","Type":"ContainerStarted","Data":"689cae05dcc0e9b9b0adda9d542e5c2b2db33884367706410ddf5bee650aba60"} Jan 21 13:25:34 crc kubenswrapper[4765]: I0121 13:25:34.740855 4765 generic.go:334] "Generic (PLEG): container finished" podID="1241b1f0-34c1-401a-b91f-13b72926cc2c" containerID="78e055d27064852c7be2cdb43ad8f3d3122cb6da672a31461fcc48a6a005bc48" exitCode=137 Jan 21 13:25:34 crc kubenswrapper[4765]: I0121 13:25:34.740928 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86c57777f6-gqpgv" event={"ID":"1241b1f0-34c1-401a-b91f-13b72926cc2c","Type":"ContainerDied","Data":"78e055d27064852c7be2cdb43ad8f3d3122cb6da672a31461fcc48a6a005bc48"} Jan 21 13:25:34 crc kubenswrapper[4765]: I0121 13:25:34.740957 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86c57777f6-gqpgv" event={"ID":"1241b1f0-34c1-401a-b91f-13b72926cc2c","Type":"ContainerStarted","Data":"d94a705493ebcac3a403271d9f366e7fa1fef088fab6c7fa6516a5936fe8fa31"} Jan 21 13:25:34 crc kubenswrapper[4765]: I0121 13:25:34.740972 4765 scope.go:117] "RemoveContainer" containerID="46f1a7c9396eca5402ea7a2319db77d5ead07a4127c2f33dffbb8adc136e01da" Jan 21 13:25:34 crc kubenswrapper[4765]: I0121 13:25:34.756349 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" event={"ID":"995d0c57-db1c-4e45-a405-cc87dc9094da","Type":"ContainerStarted","Data":"3b7f5dbfda35ace929f33bdd1d747c5fa2b7ad7d040800f3e83eb6a42844237e"} Jan 21 13:25:35 crc kubenswrapper[4765]: I0121 13:25:35.788989 4765 generic.go:334] "Generic (PLEG): container finished" podID="995d0c57-db1c-4e45-a405-cc87dc9094da" containerID="3b7f5dbfda35ace929f33bdd1d747c5fa2b7ad7d040800f3e83eb6a42844237e" exitCode=0 Jan 21 13:25:35 crc kubenswrapper[4765]: I0121 13:25:35.790109 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" event={"ID":"995d0c57-db1c-4e45-a405-cc87dc9094da","Type":"ContainerDied","Data":"3b7f5dbfda35ace929f33bdd1d747c5fa2b7ad7d040800f3e83eb6a42844237e"} Jan 21 13:25:35 crc kubenswrapper[4765]: I0121 13:25:35.803789 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fd76t" event={"ID":"ded68b6f-882a-4df2-afc6-760c969f9724","Type":"ContainerStarted","Data":"16f04dfe7f499fa61852a48d4680778d3746576e8f5979308d34b4da1b26aaa8"} Jan 21 13:25:35 crc kubenswrapper[4765]: I0121 13:25:35.972271 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:25:36 crc kubenswrapper[4765]: I0121 13:25:36.045809 4765 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 13:25:36 crc kubenswrapper[4765]: I0121 13:25:36.836090 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-fd76t" podStartSLOduration=3.8360703579999997 podStartE2EDuration="3.836070358s" podCreationTimestamp="2026-01-21 13:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:25:36.83187761 +0000 UTC m=+1397.849603432" watchObservedRunningTime="2026-01-21 13:25:36.836070358 +0000 UTC m=+1397.853796180" Jan 21 13:25:38 crc kubenswrapper[4765]: I0121 13:25:38.949376 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" event={"ID":"995d0c57-db1c-4e45-a405-cc87dc9094da","Type":"ContainerStarted","Data":"e26f5bd0d8964cadbb8bec920a918029ef026d03a754d329dc29b502f5d6b326"} Jan 21 13:25:38 crc kubenswrapper[4765]: I0121 13:25:38.949785 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:38 crc kubenswrapper[4765]: I0121 13:25:38.994798 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" podStartSLOduration=7.994779107 podStartE2EDuration="7.994779107s" podCreationTimestamp="2026-01-21 13:25:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:25:38.98830906 +0000 UTC m=+1400.006034872" watchObservedRunningTime="2026-01-21 13:25:38.994779107 +0000 UTC m=+1400.012504929" Jan 21 13:25:39 crc kubenswrapper[4765]: I0121 13:25:39.307529 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 13:25:42 crc kubenswrapper[4765]: I0121 13:25:42.148175 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"89aaf065-f456-4efe-bfdd-dafb090e4149","Type":"ContainerStarted","Data":"960a0127400eb29256b1bcfc90b651f706ca0d9b4c1325eefbe661afa86aca1a"} Jan 21 13:25:42 crc kubenswrapper[4765]: I0121 13:25:42.148483 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="89aaf065-f456-4efe-bfdd-dafb090e4149" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://960a0127400eb29256b1bcfc90b651f706ca0d9b4c1325eefbe661afa86aca1a" gracePeriod=30 Jan 21 13:25:42 crc kubenswrapper[4765]: I0121 13:25:42.164218 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9","Type":"ContainerStarted","Data":"565fef1513865cea70e7ad9897596a6ebd3cc0a19b12dc4a090961d2dd88de3a"} Jan 21 13:25:42 crc kubenswrapper[4765]: I0121 13:25:42.180498 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.237656894 podStartE2EDuration="11.180482865s" podCreationTimestamp="2026-01-21 13:25:31 +0000 UTC" firstStartedPulling="2026-01-21 13:25:32.76256672 +0000 UTC m=+1393.780292542" lastFinishedPulling="2026-01-21 13:25:41.705392691 +0000 UTC m=+1402.723118513" observedRunningTime="2026-01-21 13:25:42.179480729 +0000 UTC m=+1403.197206571" watchObservedRunningTime="2026-01-21 13:25:42.180482865 +0000 UTC m=+1403.198208687" Jan 21 13:25:42 crc kubenswrapper[4765]: I0121 13:25:42.186651 
4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dfbf6137-ef01-4260-8dca-c82ce4c55dd7","Type":"ContainerStarted","Data":"9ec58256798dac0bb5daceca61fb534749ea65bae0ebcbb450ebfb4dc7a817e6"} Jan 21 13:25:42 crc kubenswrapper[4765]: I0121 13:25:42.221406 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.754804003 podStartE2EDuration="11.2213823s" podCreationTimestamp="2026-01-21 13:25:31 +0000 UTC" firstStartedPulling="2026-01-21 13:25:33.236409312 +0000 UTC m=+1394.254135134" lastFinishedPulling="2026-01-21 13:25:41.702987609 +0000 UTC m=+1402.720713431" observedRunningTime="2026-01-21 13:25:42.208607331 +0000 UTC m=+1403.226333153" watchObservedRunningTime="2026-01-21 13:25:42.2213823 +0000 UTC m=+1403.239108122" Jan 21 13:25:42 crc kubenswrapper[4765]: I0121 13:25:42.333498 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 13:25:42 crc kubenswrapper[4765]: I0121 13:25:42.333551 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 13:25:42 crc kubenswrapper[4765]: I0121 13:25:42.567939 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-scheduler-0" podUID="fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9" containerName="nova-scheduler-scheduler" probeResult="failure" output="" Jan 21 13:25:43 crc kubenswrapper[4765]: I0121 13:25:43.197663 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dfbf6137-ef01-4260-8dca-c82ce4c55dd7","Type":"ContainerStarted","Data":"fbb45893fb5b25629cb09ce0ec8268953af9a14afb370d7090b818b5a84cc958"} Jan 21 13:25:43 crc kubenswrapper[4765]: I0121 13:25:43.200813 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="24d97d96-4a2c-4db2-bacc-acda550ebd59" containerName="nova-metadata-log" containerID="cri-o://e07d68b0c1975054de581cd2058496f98b2353516880411ab1e1ab7ed36efc1f" gracePeriod=30 Jan 21 13:25:43 crc kubenswrapper[4765]: I0121 13:25:43.200891 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24d97d96-4a2c-4db2-bacc-acda550ebd59","Type":"ContainerStarted","Data":"5873e0031b1cf0cc4a84ccc79925b44360fc77e98949a2aa77572058380fd452"} Jan 21 13:25:43 crc kubenswrapper[4765]: I0121 13:25:43.200913 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24d97d96-4a2c-4db2-bacc-acda550ebd59","Type":"ContainerStarted","Data":"e07d68b0c1975054de581cd2058496f98b2353516880411ab1e1ab7ed36efc1f"} Jan 21 13:25:43 crc kubenswrapper[4765]: I0121 13:25:43.200965 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="24d97d96-4a2c-4db2-bacc-acda550ebd59" containerName="nova-metadata-metadata" containerID="cri-o://5873e0031b1cf0cc4a84ccc79925b44360fc77e98949a2aa77572058380fd452" gracePeriod=30 Jan 21 13:25:43 crc kubenswrapper[4765]: I0121 13:25:43.225097 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.347491711 podStartE2EDuration="12.225077589s" podCreationTimestamp="2026-01-21 13:25:31 +0000 UTC" firstStartedPulling="2026-01-21 13:25:32.825478083 +0000 UTC m=+1393.843203905" lastFinishedPulling="2026-01-21 13:25:41.703063961 +0000 UTC m=+1402.720789783" observedRunningTime="2026-01-21 
13:25:43.219633558 +0000 UTC m=+1404.237359400" watchObservedRunningTime="2026-01-21 13:25:43.225077589 +0000 UTC m=+1404.242803411" Jan 21 13:25:43 crc kubenswrapper[4765]: I0121 13:25:43.254477 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.644361608 podStartE2EDuration="12.254454196s" podCreationTimestamp="2026-01-21 13:25:31 +0000 UTC" firstStartedPulling="2026-01-21 13:25:33.102410236 +0000 UTC m=+1394.120136058" lastFinishedPulling="2026-01-21 13:25:41.712502824 +0000 UTC m=+1402.730228646" observedRunningTime="2026-01-21 13:25:43.246615194 +0000 UTC m=+1404.264341016" watchObservedRunningTime="2026-01-21 13:25:43.254454196 +0000 UTC m=+1404.272180018" Jan 21 13:25:43 crc kubenswrapper[4765]: I0121 13:25:43.279964 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6558674dbd-lct5s" Jan 21 13:25:43 crc kubenswrapper[4765]: I0121 13:25:43.280021 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6558674dbd-lct5s" Jan 21 13:25:43 crc kubenswrapper[4765]: I0121 13:25:43.376522 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-86c57777f6-gqpgv" Jan 21 13:25:43 crc kubenswrapper[4765]: I0121 13:25:43.376786 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-86c57777f6-gqpgv" Jan 21 13:25:44 crc kubenswrapper[4765]: I0121 13:25:44.210588 4765 generic.go:334] "Generic (PLEG): container finished" podID="24d97d96-4a2c-4db2-bacc-acda550ebd59" containerID="5873e0031b1cf0cc4a84ccc79925b44360fc77e98949a2aa77572058380fd452" exitCode=0 Jan 21 13:25:44 crc kubenswrapper[4765]: I0121 13:25:44.211480 4765 generic.go:334] "Generic (PLEG): container finished" podID="24d97d96-4a2c-4db2-bacc-acda550ebd59" containerID="e07d68b0c1975054de581cd2058496f98b2353516880411ab1e1ab7ed36efc1f" exitCode=143 Jan 21 13:25:44 crc kubenswrapper[4765]: I0121 13:25:44.210737 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24d97d96-4a2c-4db2-bacc-acda550ebd59","Type":"ContainerDied","Data":"5873e0031b1cf0cc4a84ccc79925b44360fc77e98949a2aa77572058380fd452"} Jan 21 13:25:44 crc kubenswrapper[4765]: I0121 13:25:44.211633 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24d97d96-4a2c-4db2-bacc-acda550ebd59","Type":"ContainerDied","Data":"e07d68b0c1975054de581cd2058496f98b2353516880411ab1e1ab7ed36efc1f"} Jan 21 13:25:44 crc kubenswrapper[4765]: I0121 13:25:44.446057 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:25:44 crc kubenswrapper[4765]: I0121 13:25:44.446377 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:25:44 crc kubenswrapper[4765]: I0121 13:25:44.904736 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.058365 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9njrc\" (UniqueName: \"kubernetes.io/projected/24d97d96-4a2c-4db2-bacc-acda550ebd59-kube-api-access-9njrc\") pod \"24d97d96-4a2c-4db2-bacc-acda550ebd59\" (UID: \"24d97d96-4a2c-4db2-bacc-acda550ebd59\") " Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.058433 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d97d96-4a2c-4db2-bacc-acda550ebd59-config-data\") pod \"24d97d96-4a2c-4db2-bacc-acda550ebd59\" (UID: \"24d97d96-4a2c-4db2-bacc-acda550ebd59\") " Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.058596 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d97d96-4a2c-4db2-bacc-acda550ebd59-logs\") pod \"24d97d96-4a2c-4db2-bacc-acda550ebd59\" (UID: \"24d97d96-4a2c-4db2-bacc-acda550ebd59\") " Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.058648 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d97d96-4a2c-4db2-bacc-acda550ebd59-combined-ca-bundle\") pod \"24d97d96-4a2c-4db2-bacc-acda550ebd59\" (UID: \"24d97d96-4a2c-4db2-bacc-acda550ebd59\") " Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.059143 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24d97d96-4a2c-4db2-bacc-acda550ebd59-logs" (OuterVolumeSpecName: "logs") pod "24d97d96-4a2c-4db2-bacc-acda550ebd59" (UID: "24d97d96-4a2c-4db2-bacc-acda550ebd59"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.090528 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d97d96-4a2c-4db2-bacc-acda550ebd59-kube-api-access-9njrc" (OuterVolumeSpecName: "kube-api-access-9njrc") pod "24d97d96-4a2c-4db2-bacc-acda550ebd59" (UID: "24d97d96-4a2c-4db2-bacc-acda550ebd59"). InnerVolumeSpecName "kube-api-access-9njrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.118358 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d97d96-4a2c-4db2-bacc-acda550ebd59-config-data" (OuterVolumeSpecName: "config-data") pod "24d97d96-4a2c-4db2-bacc-acda550ebd59" (UID: "24d97d96-4a2c-4db2-bacc-acda550ebd59"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.162590 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d97d96-4a2c-4db2-bacc-acda550ebd59-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.162623 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9njrc\" (UniqueName: \"kubernetes.io/projected/24d97d96-4a2c-4db2-bacc-acda550ebd59-kube-api-access-9njrc\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.162680 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d97d96-4a2c-4db2-bacc-acda550ebd59-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.196391 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d97d96-4a2c-4db2-bacc-acda550ebd59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24d97d96-4a2c-4db2-bacc-acda550ebd59" (UID: "24d97d96-4a2c-4db2-bacc-acda550ebd59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.231280 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"24d97d96-4a2c-4db2-bacc-acda550ebd59","Type":"ContainerDied","Data":"12d245003fa9070dd5dcffb000250e01ea3df141df1b0dde517754a251c4c929"} Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.231336 4765 scope.go:117] "RemoveContainer" containerID="5873e0031b1cf0cc4a84ccc79925b44360fc77e98949a2aa77572058380fd452" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.231556 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.264526 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d97d96-4a2c-4db2-bacc-acda550ebd59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.309143 4765 scope.go:117] "RemoveContainer" containerID="e07d68b0c1975054de581cd2058496f98b2353516880411ab1e1ab7ed36efc1f" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.316365 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.357756 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.379369 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:25:45 crc kubenswrapper[4765]: E0121 13:25:45.380267 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d97d96-4a2c-4db2-bacc-acda550ebd59" containerName="nova-metadata-metadata" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.380289 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d97d96-4a2c-4db2-bacc-acda550ebd59" containerName="nova-metadata-metadata" Jan 21 13:25:45 crc kubenswrapper[4765]: E0121 13:25:45.380318 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d97d96-4a2c-4db2-bacc-acda550ebd59" containerName="nova-metadata-log" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.380326 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d97d96-4a2c-4db2-bacc-acda550ebd59" containerName="nova-metadata-log" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.380798 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d97d96-4a2c-4db2-bacc-acda550ebd59" containerName="nova-metadata-metadata" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.380844 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d97d96-4a2c-4db2-bacc-acda550ebd59" containerName="nova-metadata-log" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.382831 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.384833 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.385205 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.401621 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.573395 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " pod="openstack/nova-metadata-0" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.573435 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-248g9\" (UniqueName: \"kubernetes.io/projected/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-kube-api-access-248g9\") pod \"nova-metadata-0\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " pod="openstack/nova-metadata-0" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.573466 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " pod="openstack/nova-metadata-0" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.573539 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-config-data\") pod \"nova-metadata-0\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " pod="openstack/nova-metadata-0" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.573620 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-logs\") pod \"nova-metadata-0\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " pod="openstack/nova-metadata-0" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.652306 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24d97d96-4a2c-4db2-bacc-acda550ebd59" path="/var/lib/kubelet/pods/24d97d96-4a2c-4db2-bacc-acda550ebd59/volumes" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.676649 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-logs\") pod \"nova-metadata-0\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " pod="openstack/nova-metadata-0" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.676877 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " pod="openstack/nova-metadata-0" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.677092 4765 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-logs\") pod \"nova-metadata-0\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " pod="openstack/nova-metadata-0" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.678353 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-248g9\" (UniqueName: \"kubernetes.io/projected/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-kube-api-access-248g9\") pod \"nova-metadata-0\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " pod="openstack/nova-metadata-0" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.678442 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " pod="openstack/nova-metadata-0" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.678647 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-config-data\") pod \"nova-metadata-0\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " pod="openstack/nova-metadata-0" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.689512 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-config-data\") pod \"nova-metadata-0\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " pod="openstack/nova-metadata-0" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.689759 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " pod="openstack/nova-metadata-0" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.706528 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " pod="openstack/nova-metadata-0" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.716043 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-248g9\" (UniqueName: \"kubernetes.io/projected/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-kube-api-access-248g9\") pod \"nova-metadata-0\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " pod="openstack/nova-metadata-0" Jan 21 13:25:45 crc kubenswrapper[4765]: I0121 13:25:45.744360 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:25:46 crc kubenswrapper[4765]: I0121 13:25:46.338097 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:25:47 crc kubenswrapper[4765]: I0121 13:25:47.011807 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:25:47 crc kubenswrapper[4765]: I0121 13:25:47.251786 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7c893cff-6e87-4cd6-b439-a77dfefa7c7b","Type":"ContainerStarted","Data":"20c518d9c30899f573e6b9332208ea2530cf103a6ccb95cba1d3face746c976a"} Jan 21 13:25:47 crc kubenswrapper[4765]: I0121 13:25:47.251846 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7c893cff-6e87-4cd6-b439-a77dfefa7c7b","Type":"ContainerStarted","Data":"96add7bd136ef8827cdd052a1922bdd37f00aca32cab90a78b94db5def98ce39"} Jan 21 13:25:47 crc kubenswrapper[4765]: I0121 13:25:47.251857 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7c893cff-6e87-4cd6-b439-a77dfefa7c7b","Type":"ContainerStarted","Data":"08d8c3d1a70bfae1000aaafe1ed23f5d0cb314b497b86d02fb3bee914ae2df0e"} Jan 21 13:25:47 crc kubenswrapper[4765]: I0121 13:25:47.277857 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.2778405729999998 podStartE2EDuration="2.277840573s" podCreationTimestamp="2026-01-21 13:25:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:25:47.275321458 +0000 UTC m=+1408.293047300" watchObservedRunningTime="2026-01-21 13:25:47.277840573 +0000 UTC m=+1408.295566385" Jan 21 13:25:47 crc kubenswrapper[4765]: I0121 13:25:47.451435 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" Jan 21 13:25:47 crc kubenswrapper[4765]: I0121 13:25:47.470094 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 13:25:47 crc kubenswrapper[4765]: I0121 13:25:47.470632 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="12b93916-e6dc-4aac-809e-0dfe1b11ed1a" containerName="kube-state-metrics" containerID="cri-o://a3f1da0a157762cc91be03569cf42b0a44b62a1c1885eec7a53e04727abf2412" gracePeriod=30 Jan 21 13:25:47 crc kubenswrapper[4765]: I0121 13:25:47.567574 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-vcvq5"] Jan 21 13:25:47 crc kubenswrapper[4765]: I0121 13:25:47.567846 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" podUID="0bd7ae01-c989-4e75-bc95-4c39a5fb8670" containerName="dnsmasq-dns" containerID="cri-o://08731cc17f14fe9a4ba2e3add17742e2a164db0b0de19888cd0bc7bc1a7e34c3" gracePeriod=10 Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.184684 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.305280 4765 generic.go:334] "Generic (PLEG): container finished" podID="2dc89bc6-a242-4876-bf76-4d93cbc8d55d" containerID="174f571d566dc44d071eb42dcfca229e44df5df09a299b3017dce551d67830a7" exitCode=0 Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.305397 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-j6x8m" event={"ID":"2dc89bc6-a242-4876-bf76-4d93cbc8d55d","Type":"ContainerDied","Data":"174f571d566dc44d071eb42dcfca229e44df5df09a299b3017dce551d67830a7"} Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.306917 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.316605 4765 generic.go:334] "Generic (PLEG): container finished" podID="12b93916-e6dc-4aac-809e-0dfe1b11ed1a" containerID="a3f1da0a157762cc91be03569cf42b0a44b62a1c1885eec7a53e04727abf2412" exitCode=2 Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.316727 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"12b93916-e6dc-4aac-809e-0dfe1b11ed1a","Type":"ContainerDied","Data":"a3f1da0a157762cc91be03569cf42b0a44b62a1c1885eec7a53e04727abf2412"} Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.316736 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.316772 4765 scope.go:117] "RemoveContainer" containerID="a3f1da0a157762cc91be03569cf42b0a44b62a1c1885eec7a53e04727abf2412" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.316759 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"12b93916-e6dc-4aac-809e-0dfe1b11ed1a","Type":"ContainerDied","Data":"d43c66752619e80bbff14c73b01198ef9c5cd208c687a948fc4f71184bf53d53"} Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.342998 4765 generic.go:334] "Generic (PLEG): container finished" podID="0bd7ae01-c989-4e75-bc95-4c39a5fb8670" containerID="08731cc17f14fe9a4ba2e3add17742e2a164db0b0de19888cd0bc7bc1a7e34c3" exitCode=0 Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.343997 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.344168 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" event={"ID":"0bd7ae01-c989-4e75-bc95-4c39a5fb8670","Type":"ContainerDied","Data":"08731cc17f14fe9a4ba2e3add17742e2a164db0b0de19888cd0bc7bc1a7e34c3"} Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.344200 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-vcvq5" event={"ID":"0bd7ae01-c989-4e75-bc95-4c39a5fb8670","Type":"ContainerDied","Data":"e295ad4b73e4ef9f3df6a779ec640aae31e7ab8144437228c60db847506fa294"} Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.358924 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-dns-swift-storage-0\") pod \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.358977 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fz2n\" (UniqueName: \"kubernetes.io/projected/12b93916-e6dc-4aac-809e-0dfe1b11ed1a-kube-api-access-8fz2n\") pod \"12b93916-e6dc-4aac-809e-0dfe1b11ed1a\" (UID: \"12b93916-e6dc-4aac-809e-0dfe1b11ed1a\") " Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.359034 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-config\") pod \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.359053 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-ovsdbserver-sb\") pod \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.359240 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q6hg\" (UniqueName: \"kubernetes.io/projected/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-kube-api-access-9q6hg\") pod \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.359262 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-ovsdbserver-nb\") pod \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.359328 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-dns-svc\") pod \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\" (UID: \"0bd7ae01-c989-4e75-bc95-4c39a5fb8670\") " Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.376957 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-kube-api-access-9q6hg" (OuterVolumeSpecName: "kube-api-access-9q6hg") pod "0bd7ae01-c989-4e75-bc95-4c39a5fb8670" (UID: "0bd7ae01-c989-4e75-bc95-4c39a5fb8670"). 
InnerVolumeSpecName "kube-api-access-9q6hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.422415 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12b93916-e6dc-4aac-809e-0dfe1b11ed1a-kube-api-access-8fz2n" (OuterVolumeSpecName: "kube-api-access-8fz2n") pod "12b93916-e6dc-4aac-809e-0dfe1b11ed1a" (UID: "12b93916-e6dc-4aac-809e-0dfe1b11ed1a"). InnerVolumeSpecName "kube-api-access-8fz2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.440937 4765 scope.go:117] "RemoveContainer" containerID="a3f1da0a157762cc91be03569cf42b0a44b62a1c1885eec7a53e04727abf2412" Jan 21 13:25:48 crc kubenswrapper[4765]: E0121 13:25:48.451509 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3f1da0a157762cc91be03569cf42b0a44b62a1c1885eec7a53e04727abf2412\": container with ID starting with a3f1da0a157762cc91be03569cf42b0a44b62a1c1885eec7a53e04727abf2412 not found: ID does not exist" containerID="a3f1da0a157762cc91be03569cf42b0a44b62a1c1885eec7a53e04727abf2412" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.451565 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3f1da0a157762cc91be03569cf42b0a44b62a1c1885eec7a53e04727abf2412"} err="failed to get container status \"a3f1da0a157762cc91be03569cf42b0a44b62a1c1885eec7a53e04727abf2412\": rpc error: code = NotFound desc = could not find container \"a3f1da0a157762cc91be03569cf42b0a44b62a1c1885eec7a53e04727abf2412\": container with ID starting with a3f1da0a157762cc91be03569cf42b0a44b62a1c1885eec7a53e04727abf2412 not found: ID does not exist" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.451602 4765 scope.go:117] "RemoveContainer" containerID="08731cc17f14fe9a4ba2e3add17742e2a164db0b0de19888cd0bc7bc1a7e34c3" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.470635 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fz2n\" (UniqueName: \"kubernetes.io/projected/12b93916-e6dc-4aac-809e-0dfe1b11ed1a-kube-api-access-8fz2n\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.470807 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q6hg\" (UniqueName: \"kubernetes.io/projected/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-kube-api-access-9q6hg\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.504780 4765 scope.go:117] "RemoveContainer" containerID="1947d7bb0442057606477c70c9ff3c80288f4e3c10938db0df6cbda3a5e6fe44" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.544511 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0bd7ae01-c989-4e75-bc95-4c39a5fb8670" (UID: "0bd7ae01-c989-4e75-bc95-4c39a5fb8670"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.559774 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0bd7ae01-c989-4e75-bc95-4c39a5fb8670" (UID: "0bd7ae01-c989-4e75-bc95-4c39a5fb8670"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.560936 4765 scope.go:117] "RemoveContainer" containerID="08731cc17f14fe9a4ba2e3add17742e2a164db0b0de19888cd0bc7bc1a7e34c3" Jan 21 13:25:48 crc kubenswrapper[4765]: E0121 13:25:48.562678 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08731cc17f14fe9a4ba2e3add17742e2a164db0b0de19888cd0bc7bc1a7e34c3\": container with ID starting with 08731cc17f14fe9a4ba2e3add17742e2a164db0b0de19888cd0bc7bc1a7e34c3 not found: ID does not exist" containerID="08731cc17f14fe9a4ba2e3add17742e2a164db0b0de19888cd0bc7bc1a7e34c3" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.562717 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08731cc17f14fe9a4ba2e3add17742e2a164db0b0de19888cd0bc7bc1a7e34c3"} err="failed to get container status \"08731cc17f14fe9a4ba2e3add17742e2a164db0b0de19888cd0bc7bc1a7e34c3\": rpc error: code = NotFound desc = could not find container \"08731cc17f14fe9a4ba2e3add17742e2a164db0b0de19888cd0bc7bc1a7e34c3\": container with ID starting with 08731cc17f14fe9a4ba2e3add17742e2a164db0b0de19888cd0bc7bc1a7e34c3 not found: ID does not exist" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.562745 4765 scope.go:117] "RemoveContainer" containerID="1947d7bb0442057606477c70c9ff3c80288f4e3c10938db0df6cbda3a5e6fe44" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.562848 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-config" (OuterVolumeSpecName: "config") pod "0bd7ae01-c989-4e75-bc95-4c39a5fb8670" (UID: "0bd7ae01-c989-4e75-bc95-4c39a5fb8670"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:25:48 crc kubenswrapper[4765]: E0121 13:25:48.563298 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1947d7bb0442057606477c70c9ff3c80288f4e3c10938db0df6cbda3a5e6fe44\": container with ID starting with 1947d7bb0442057606477c70c9ff3c80288f4e3c10938db0df6cbda3a5e6fe44 not found: ID does not exist" containerID="1947d7bb0442057606477c70c9ff3c80288f4e3c10938db0df6cbda3a5e6fe44" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.563322 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1947d7bb0442057606477c70c9ff3c80288f4e3c10938db0df6cbda3a5e6fe44"} err="failed to get container status \"1947d7bb0442057606477c70c9ff3c80288f4e3c10938db0df6cbda3a5e6fe44\": rpc error: code = NotFound desc = could not find container \"1947d7bb0442057606477c70c9ff3c80288f4e3c10938db0df6cbda3a5e6fe44\": container with ID starting with 1947d7bb0442057606477c70c9ff3c80288f4e3c10938db0df6cbda3a5e6fe44 not found: ID does not exist" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.572267 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.572568 4765 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.572641 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.576467 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0bd7ae01-c989-4e75-bc95-4c39a5fb8670" (UID: "0bd7ae01-c989-4e75-bc95-4c39a5fb8670"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.585758 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0bd7ae01-c989-4e75-bc95-4c39a5fb8670" (UID: "0bd7ae01-c989-4e75-bc95-4c39a5fb8670"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.655344 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.664779 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.675936 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.675970 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bd7ae01-c989-4e75-bc95-4c39a5fb8670-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.684621 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 13:25:48 crc kubenswrapper[4765]: E0121 13:25:48.685068 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd7ae01-c989-4e75-bc95-4c39a5fb8670" containerName="dnsmasq-dns" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.685093 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd7ae01-c989-4e75-bc95-4c39a5fb8670" containerName="dnsmasq-dns" Jan 21 13:25:48 crc kubenswrapper[4765]: E0121 13:25:48.685136 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b93916-e6dc-4aac-809e-0dfe1b11ed1a" containerName="kube-state-metrics" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.685143 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b93916-e6dc-4aac-809e-0dfe1b11ed1a" containerName="kube-state-metrics" Jan 21 13:25:48 crc kubenswrapper[4765]: E0121 13:25:48.685164 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd7ae01-c989-4e75-bc95-4c39a5fb8670" containerName="init" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.685171 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd7ae01-c989-4e75-bc95-4c39a5fb8670" containerName="init" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.685359 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd7ae01-c989-4e75-bc95-4c39a5fb8670" containerName="dnsmasq-dns" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.685377 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="12b93916-e6dc-4aac-809e-0dfe1b11ed1a" containerName="kube-state-metrics" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.686020 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.714828 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.715247 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.723384 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-vcvq5"] Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.733861 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-vcvq5"] Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.742083 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.777782 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a\") " pod="openstack/kube-state-metrics-0" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.777952 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a\") " pod="openstack/kube-state-metrics-0" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.778145 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a\") " pod="openstack/kube-state-metrics-0" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.778282 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drfz9\" (UniqueName: \"kubernetes.io/projected/f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a-kube-api-access-drfz9\") pod \"kube-state-metrics-0\" (UID: \"f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a\") " pod="openstack/kube-state-metrics-0" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.879765 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a\") " pod="openstack/kube-state-metrics-0" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.879877 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a\") " pod="openstack/kube-state-metrics-0" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.879989 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a\") " pod="openstack/kube-state-metrics-0" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.880044 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drfz9\" (UniqueName: \"kubernetes.io/projected/f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a-kube-api-access-drfz9\") pod \"kube-state-metrics-0\" (UID: \"f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a\") " pod="openstack/kube-state-metrics-0" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.885265 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a\") " pod="openstack/kube-state-metrics-0" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.885292 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a\") " pod="openstack/kube-state-metrics-0" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.886732 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a\") " pod="openstack/kube-state-metrics-0" Jan 21 13:25:48 crc kubenswrapper[4765]: I0121 13:25:48.896898 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drfz9\" (UniqueName: \"kubernetes.io/projected/f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a-kube-api-access-drfz9\") pod \"kube-state-metrics-0\" (UID: \"f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a\") " pod="openstack/kube-state-metrics-0" Jan 21 13:25:49 crc kubenswrapper[4765]: I0121 13:25:49.049302 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 13:25:49 crc kubenswrapper[4765]: I0121 13:25:49.378381 4765 generic.go:334] "Generic (PLEG): container finished" podID="ded68b6f-882a-4df2-afc6-760c969f9724" containerID="16f04dfe7f499fa61852a48d4680778d3746576e8f5979308d34b4da1b26aaa8" exitCode=0 Jan 21 13:25:49 crc kubenswrapper[4765]: I0121 13:25:49.378562 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fd76t" event={"ID":"ded68b6f-882a-4df2-afc6-760c969f9724","Type":"ContainerDied","Data":"16f04dfe7f499fa61852a48d4680778d3746576e8f5979308d34b4da1b26aaa8"} Jan 21 13:25:49 crc kubenswrapper[4765]: I0121 13:25:49.607841 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 13:25:49 crc kubenswrapper[4765]: I0121 13:25:49.659018 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bd7ae01-c989-4e75-bc95-4c39a5fb8670" path="/var/lib/kubelet/pods/0bd7ae01-c989-4e75-bc95-4c39a5fb8670/volumes" Jan 21 13:25:49 crc kubenswrapper[4765]: I0121 13:25:49.659770 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12b93916-e6dc-4aac-809e-0dfe1b11ed1a" path="/var/lib/kubelet/pods/12b93916-e6dc-4aac-809e-0dfe1b11ed1a/volumes" Jan 21 13:25:49 crc kubenswrapper[4765]: I0121 13:25:49.959965 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-j6x8m" Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.009121 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-config-data\") pod \"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\" (UID: \"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\") " Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.009182 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-combined-ca-bundle\") pod \"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\" (UID: \"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\") " Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.009261 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-scripts\") pod \"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\" (UID: \"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\") " Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.009300 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jzs8\" (UniqueName: \"kubernetes.io/projected/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-kube-api-access-2jzs8\") pod \"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\" (UID: \"2dc89bc6-a242-4876-bf76-4d93cbc8d55d\") " Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.018461 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-scripts" (OuterVolumeSpecName: "scripts") pod "2dc89bc6-a242-4876-bf76-4d93cbc8d55d" (UID: "2dc89bc6-a242-4876-bf76-4d93cbc8d55d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.019445 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-kube-api-access-2jzs8" (OuterVolumeSpecName: "kube-api-access-2jzs8") pod "2dc89bc6-a242-4876-bf76-4d93cbc8d55d" (UID: "2dc89bc6-a242-4876-bf76-4d93cbc8d55d"). InnerVolumeSpecName "kube-api-access-2jzs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.059039 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2dc89bc6-a242-4876-bf76-4d93cbc8d55d" (UID: "2dc89bc6-a242-4876-bf76-4d93cbc8d55d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.061415 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-config-data" (OuterVolumeSpecName: "config-data") pod "2dc89bc6-a242-4876-bf76-4d93cbc8d55d" (UID: "2dc89bc6-a242-4876-bf76-4d93cbc8d55d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.111362 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.111397 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.111407 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.111420 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jzs8\" (UniqueName: \"kubernetes.io/projected/2dc89bc6-a242-4876-bf76-4d93cbc8d55d-kube-api-access-2jzs8\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.388893 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-j6x8m" event={"ID":"2dc89bc6-a242-4876-bf76-4d93cbc8d55d","Type":"ContainerDied","Data":"c218acdbb548c3b34c4a0099610289d4ef01a5a0f2e5cf8e4e19becbfa7b2c7b"} Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.388968 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c218acdbb548c3b34c4a0099610289d4ef01a5a0f2e5cf8e4e19becbfa7b2c7b" Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.388906 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-j6x8m" Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.402917 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a","Type":"ContainerStarted","Data":"4694965ee5e82751792eeb49de7197fc2a850517a971d1fcea83d90ade28789a"} Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.402960 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a","Type":"ContainerStarted","Data":"60f38620c91cceb2301719d891e97e22b0b05e415fba0fa85e6319d1426d99b3"} Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.407420 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.436351 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.985716587 podStartE2EDuration="2.436327859s" podCreationTimestamp="2026-01-21 13:25:48 +0000 UTC" firstStartedPulling="2026-01-21 13:25:49.597433002 +0000 UTC m=+1410.615158824" lastFinishedPulling="2026-01-21 13:25:50.048044274 +0000 UTC m=+1411.065770096" observedRunningTime="2026-01-21 13:25:50.432838129 +0000 UTC m=+1411.450563951" watchObservedRunningTime="2026-01-21 13:25:50.436327859 +0000 UTC m=+1411.454053681" Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.620971 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.621600 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dfbf6137-ef01-4260-8dca-c82ce4c55dd7" containerName="nova-api-log" containerID="cri-o://9ec58256798dac0bb5daceca61fb534749ea65bae0ebcbb450ebfb4dc7a817e6" gracePeriod=30 Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.622101 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dfbf6137-ef01-4260-8dca-c82ce4c55dd7" containerName="nova-api-api" containerID="cri-o://fbb45893fb5b25629cb09ce0ec8268953af9a14afb370d7090b818b5a84cc958" gracePeriod=30 Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.698093 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.698381 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9" containerName="nova-scheduler-scheduler" containerID="cri-o://565fef1513865cea70e7ad9897596a6ebd3cc0a19b12dc4a090961d2dd88de3a" gracePeriod=30 Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.722896 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.723341 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7c893cff-6e87-4cd6-b439-a77dfefa7c7b" containerName="nova-metadata-log" containerID="cri-o://96add7bd136ef8827cdd052a1922bdd37f00aca32cab90a78b94db5def98ce39" gracePeriod=30 Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.723827 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7c893cff-6e87-4cd6-b439-a77dfefa7c7b" 
containerName="nova-metadata-metadata" containerID="cri-o://20c518d9c30899f573e6b9332208ea2530cf103a6ccb95cba1d3face746c976a" gracePeriod=30 Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.746445 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.746493 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.914675 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fd76t" Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.948525 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded68b6f-882a-4df2-afc6-760c969f9724-combined-ca-bundle\") pod \"ded68b6f-882a-4df2-afc6-760c969f9724\" (UID: \"ded68b6f-882a-4df2-afc6-760c969f9724\") " Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.948671 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brs7b\" (UniqueName: \"kubernetes.io/projected/ded68b6f-882a-4df2-afc6-760c969f9724-kube-api-access-brs7b\") pod \"ded68b6f-882a-4df2-afc6-760c969f9724\" (UID: \"ded68b6f-882a-4df2-afc6-760c969f9724\") " Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.948800 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ded68b6f-882a-4df2-afc6-760c969f9724-scripts\") pod \"ded68b6f-882a-4df2-afc6-760c969f9724\" (UID: \"ded68b6f-882a-4df2-afc6-760c969f9724\") " Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.948833 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded68b6f-882a-4df2-afc6-760c969f9724-config-data\") pod \"ded68b6f-882a-4df2-afc6-760c969f9724\" (UID: \"ded68b6f-882a-4df2-afc6-760c969f9724\") " Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.972482 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ded68b6f-882a-4df2-afc6-760c969f9724-kube-api-access-brs7b" (OuterVolumeSpecName: "kube-api-access-brs7b") pod "ded68b6f-882a-4df2-afc6-760c969f9724" (UID: "ded68b6f-882a-4df2-afc6-760c969f9724"). InnerVolumeSpecName "kube-api-access-brs7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.975385 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ded68b6f-882a-4df2-afc6-760c969f9724-scripts" (OuterVolumeSpecName: "scripts") pod "ded68b6f-882a-4df2-afc6-760c969f9724" (UID: "ded68b6f-882a-4df2-afc6-760c969f9724"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:50 crc kubenswrapper[4765]: I0121 13:25:50.999306 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ded68b6f-882a-4df2-afc6-760c969f9724-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ded68b6f-882a-4df2-afc6-760c969f9724" (UID: "ded68b6f-882a-4df2-afc6-760c969f9724"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.009426 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ded68b6f-882a-4df2-afc6-760c969f9724-config-data" (OuterVolumeSpecName: "config-data") pod "ded68b6f-882a-4df2-afc6-760c969f9724" (UID: "ded68b6f-882a-4df2-afc6-760c969f9724"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.051052 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ded68b6f-882a-4df2-afc6-760c969f9724-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.051618 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded68b6f-882a-4df2-afc6-760c969f9724-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.051855 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded68b6f-882a-4df2-afc6-760c969f9724-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.052235 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brs7b\" (UniqueName: \"kubernetes.io/projected/ded68b6f-882a-4df2-afc6-760c969f9724-kube-api-access-brs7b\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.090069 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.090350 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerName="ceilometer-central-agent" containerID="cri-o://29245bc8840badecf93e9f0b20ca87b361bedc52a01b76e4b6eea0818764966d" gracePeriod=30 Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.090465 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerName="proxy-httpd" containerID="cri-o://cd8977f9ca18e3b957bce3f63e6542809c46e517bf63610e96d17ec5b0b5ff9e" gracePeriod=30 Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.090504 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerName="sg-core" containerID="cri-o://b3d2fe8470a06b0e221cc2677ca2f89ef5fd7561ed66b7b1c81e02af50d85a12" gracePeriod=30 Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.090546 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerName="ceilometer-notification-agent" containerID="cri-o://59afb391cdacd27b9f7a2666cb850805cb893ba3455da77828c89b984f57d6e4" gracePeriod=30 Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.429345 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fd76t" event={"ID":"ded68b6f-882a-4df2-afc6-760c969f9724","Type":"ContainerDied","Data":"ab75d0dec5db883d4af76b6942e6dcfa178103ddf1259045c5d94a959d8fca9d"} Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.429624 4765 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ab75d0dec5db883d4af76b6942e6dcfa178103ddf1259045c5d94a959d8fca9d" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.429438 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fd76t" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.489909 4765 generic.go:334] "Generic (PLEG): container finished" podID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerID="cd8977f9ca18e3b957bce3f63e6542809c46e517bf63610e96d17ec5b0b5ff9e" exitCode=0 Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.489968 4765 generic.go:334] "Generic (PLEG): container finished" podID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerID="b3d2fe8470a06b0e221cc2677ca2f89ef5fd7561ed66b7b1c81e02af50d85a12" exitCode=2 Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.490074 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0f3546f-0a2b-4529-b685-8674eb662a8b","Type":"ContainerDied","Data":"cd8977f9ca18e3b957bce3f63e6542809c46e517bf63610e96d17ec5b0b5ff9e"} Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.490125 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0f3546f-0a2b-4529-b685-8674eb662a8b","Type":"ContainerDied","Data":"b3d2fe8470a06b0e221cc2677ca2f89ef5fd7561ed66b7b1c81e02af50d85a12"} Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.507301 4765 generic.go:334] "Generic (PLEG): container finished" podID="7c893cff-6e87-4cd6-b439-a77dfefa7c7b" containerID="20c518d9c30899f573e6b9332208ea2530cf103a6ccb95cba1d3face746c976a" exitCode=0 Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.507343 4765 generic.go:334] "Generic (PLEG): container finished" podID="7c893cff-6e87-4cd6-b439-a77dfefa7c7b" containerID="96add7bd136ef8827cdd052a1922bdd37f00aca32cab90a78b94db5def98ce39" exitCode=143 Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.507395 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7c893cff-6e87-4cd6-b439-a77dfefa7c7b","Type":"ContainerDied","Data":"20c518d9c30899f573e6b9332208ea2530cf103a6ccb95cba1d3face746c976a"} Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.507431 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7c893cff-6e87-4cd6-b439-a77dfefa7c7b","Type":"ContainerDied","Data":"96add7bd136ef8827cdd052a1922bdd37f00aca32cab90a78b94db5def98ce39"} Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.510956 4765 generic.go:334] "Generic (PLEG): container finished" podID="dfbf6137-ef01-4260-8dca-c82ce4c55dd7" containerID="fbb45893fb5b25629cb09ce0ec8268953af9a14afb370d7090b818b5a84cc958" exitCode=0 Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.510980 4765 generic.go:334] "Generic (PLEG): container finished" podID="dfbf6137-ef01-4260-8dca-c82ce4c55dd7" containerID="9ec58256798dac0bb5daceca61fb534749ea65bae0ebcbb450ebfb4dc7a817e6" exitCode=143 Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.511141 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dfbf6137-ef01-4260-8dca-c82ce4c55dd7","Type":"ContainerDied","Data":"fbb45893fb5b25629cb09ce0ec8268953af9a14afb370d7090b818b5a84cc958"} Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.511223 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"dfbf6137-ef01-4260-8dca-c82ce4c55dd7","Type":"ContainerDied","Data":"9ec58256798dac0bb5daceca61fb534749ea65bae0ebcbb450ebfb4dc7a817e6"} Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.538108 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 13:25:51 crc kubenswrapper[4765]: E0121 13:25:51.538677 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded68b6f-882a-4df2-afc6-760c969f9724" containerName="nova-cell1-conductor-db-sync" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.538700 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded68b6f-882a-4df2-afc6-760c969f9724" containerName="nova-cell1-conductor-db-sync" Jan 21 13:25:51 crc kubenswrapper[4765]: E0121 13:25:51.538734 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dc89bc6-a242-4876-bf76-4d93cbc8d55d" containerName="nova-manage" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.538744 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc89bc6-a242-4876-bf76-4d93cbc8d55d" containerName="nova-manage" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.539007 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dc89bc6-a242-4876-bf76-4d93cbc8d55d" containerName="nova-manage" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.539036 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="ded68b6f-882a-4df2-afc6-760c969f9724" containerName="nova-cell1-conductor-db-sync" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.539879 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.553784 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.572876 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90f30caf-f36a-421c-b3fc-40d01f40d9e7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"90f30caf-f36a-421c-b3fc-40d01f40d9e7\") " pod="openstack/nova-cell1-conductor-0" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.573051 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9s84\" (UniqueName: \"kubernetes.io/projected/90f30caf-f36a-421c-b3fc-40d01f40d9e7-kube-api-access-j9s84\") pod \"nova-cell1-conductor-0\" (UID: \"90f30caf-f36a-421c-b3fc-40d01f40d9e7\") " pod="openstack/nova-cell1-conductor-0" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.573527 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f30caf-f36a-421c-b3fc-40d01f40d9e7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"90f30caf-f36a-421c-b3fc-40d01f40d9e7\") " pod="openstack/nova-cell1-conductor-0" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.579925 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.676684 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f30caf-f36a-421c-b3fc-40d01f40d9e7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: 
\"90f30caf-f36a-421c-b3fc-40d01f40d9e7\") " pod="openstack/nova-cell1-conductor-0" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.677440 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90f30caf-f36a-421c-b3fc-40d01f40d9e7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"90f30caf-f36a-421c-b3fc-40d01f40d9e7\") " pod="openstack/nova-cell1-conductor-0" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.677509 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9s84\" (UniqueName: \"kubernetes.io/projected/90f30caf-f36a-421c-b3fc-40d01f40d9e7-kube-api-access-j9s84\") pod \"nova-cell1-conductor-0\" (UID: \"90f30caf-f36a-421c-b3fc-40d01f40d9e7\") " pod="openstack/nova-cell1-conductor-0" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.683729 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f30caf-f36a-421c-b3fc-40d01f40d9e7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"90f30caf-f36a-421c-b3fc-40d01f40d9e7\") " pod="openstack/nova-cell1-conductor-0" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.687405 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90f30caf-f36a-421c-b3fc-40d01f40d9e7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"90f30caf-f36a-421c-b3fc-40d01f40d9e7\") " pod="openstack/nova-cell1-conductor-0" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.692197 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.701740 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9s84\" (UniqueName: \"kubernetes.io/projected/90f30caf-f36a-421c-b3fc-40d01f40d9e7-kube-api-access-j9s84\") pod \"nova-cell1-conductor-0\" (UID: \"90f30caf-f36a-421c-b3fc-40d01f40d9e7\") " pod="openstack/nova-cell1-conductor-0" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.778435 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-config-data\") pod \"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\" (UID: \"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\") " Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.779022 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-combined-ca-bundle\") pod \"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\" (UID: \"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\") " Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.779288 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tv9h\" (UniqueName: \"kubernetes.io/projected/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-kube-api-access-6tv9h\") pod \"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\" (UID: \"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\") " Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.779323 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-logs\") pod \"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\" (UID: \"dfbf6137-ef01-4260-8dca-c82ce4c55dd7\") " Jan 21 13:25:51 
crc kubenswrapper[4765]: I0121 13:25:51.780673 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-logs" (OuterVolumeSpecName: "logs") pod "dfbf6137-ef01-4260-8dca-c82ce4c55dd7" (UID: "dfbf6137-ef01-4260-8dca-c82ce4c55dd7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.786479 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-kube-api-access-6tv9h" (OuterVolumeSpecName: "kube-api-access-6tv9h") pod "dfbf6137-ef01-4260-8dca-c82ce4c55dd7" (UID: "dfbf6137-ef01-4260-8dca-c82ce4c55dd7"). InnerVolumeSpecName "kube-api-access-6tv9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.809230 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-config-data" (OuterVolumeSpecName: "config-data") pod "dfbf6137-ef01-4260-8dca-c82ce4c55dd7" (UID: "dfbf6137-ef01-4260-8dca-c82ce4c55dd7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.829482 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfbf6137-ef01-4260-8dca-c82ce4c55dd7" (UID: "dfbf6137-ef01-4260-8dca-c82ce4c55dd7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.830387 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.881299 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-combined-ca-bundle\") pod \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.881367 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-config-data\") pod \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.881401 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-248g9\" (UniqueName: \"kubernetes.io/projected/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-kube-api-access-248g9\") pod \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.881419 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-logs\") pod \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.881476 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-nova-metadata-tls-certs\") pod \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\" (UID: \"7c893cff-6e87-4cd6-b439-a77dfefa7c7b\") " Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.881931 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.881949 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.881960 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tv9h\" (UniqueName: \"kubernetes.io/projected/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-kube-api-access-6tv9h\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.881969 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfbf6137-ef01-4260-8dca-c82ce4c55dd7-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.884923 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-logs" (OuterVolumeSpecName: "logs") pod "7c893cff-6e87-4cd6-b439-a77dfefa7c7b" (UID: "7c893cff-6e87-4cd6-b439-a77dfefa7c7b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.892461 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-kube-api-access-248g9" (OuterVolumeSpecName: "kube-api-access-248g9") pod "7c893cff-6e87-4cd6-b439-a77dfefa7c7b" (UID: "7c893cff-6e87-4cd6-b439-a77dfefa7c7b"). InnerVolumeSpecName "kube-api-access-248g9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.905780 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.919346 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-config-data" (OuterVolumeSpecName: "config-data") pod "7c893cff-6e87-4cd6-b439-a77dfefa7c7b" (UID: "7c893cff-6e87-4cd6-b439-a77dfefa7c7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.953721 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7c893cff-6e87-4cd6-b439-a77dfefa7c7b" (UID: "7c893cff-6e87-4cd6-b439-a77dfefa7c7b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.968406 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c893cff-6e87-4cd6-b439-a77dfefa7c7b" (UID: "7c893cff-6e87-4cd6-b439-a77dfefa7c7b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.990874 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.990943 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.990955 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-248g9\" (UniqueName: \"kubernetes.io/projected/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-kube-api-access-248g9\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.991121 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:51 crc kubenswrapper[4765]: I0121 13:25:51.991160 4765 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c893cff-6e87-4cd6-b439-a77dfefa7c7b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.504096 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.523733 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7c893cff-6e87-4cd6-b439-a77dfefa7c7b","Type":"ContainerDied","Data":"08d8c3d1a70bfae1000aaafe1ed23f5d0cb314b497b86d02fb3bee914ae2df0e"} Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.523789 4765 scope.go:117] "RemoveContainer" containerID="20c518d9c30899f573e6b9332208ea2530cf103a6ccb95cba1d3face746c976a" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.523939 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.527605 4765 generic.go:334] "Generic (PLEG): container finished" podID="fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9" containerID="565fef1513865cea70e7ad9897596a6ebd3cc0a19b12dc4a090961d2dd88de3a" exitCode=0 Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.527672 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9","Type":"ContainerDied","Data":"565fef1513865cea70e7ad9897596a6ebd3cc0a19b12dc4a090961d2dd88de3a"} Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.537469 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dfbf6137-ef01-4260-8dca-c82ce4c55dd7","Type":"ContainerDied","Data":"9ccafa3e58279f85da2abe0b4bbc612c99aef17c58b64183a1d6d681593446e5"} Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.537585 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.552336 4765 generic.go:334] "Generic (PLEG): container finished" podID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerID="29245bc8840badecf93e9f0b20ca87b361bedc52a01b76e4b6eea0818764966d" exitCode=0 Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.552685 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0f3546f-0a2b-4529-b685-8674eb662a8b","Type":"ContainerDied","Data":"29245bc8840badecf93e9f0b20ca87b361bedc52a01b76e4b6eea0818764966d"} Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.682278 4765 scope.go:117] "RemoveContainer" containerID="96add7bd136ef8827cdd052a1922bdd37f00aca32cab90a78b94db5def98ce39" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.719195 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.731777 4765 scope.go:117] "RemoveContainer" containerID="fbb45893fb5b25629cb09ce0ec8268953af9a14afb370d7090b818b5a84cc958" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.747383 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.784885 4765 scope.go:117] "RemoveContainer" containerID="9ec58256798dac0bb5daceca61fb534749ea65bae0ebcbb450ebfb4dc7a817e6" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.785143 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.808252 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9-config-data\") pod \"fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9\" (UID: \"fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9\") " Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.808402 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9-combined-ca-bundle\") pod \"fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9\" (UID: \"fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9\") " Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.808499 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvhqd\" (UniqueName: \"kubernetes.io/projected/fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9-kube-api-access-dvhqd\") pod \"fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9\" (UID: \"fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9\") " Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.818424 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.838008 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.843643 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9-kube-api-access-dvhqd" (OuterVolumeSpecName: "kube-api-access-dvhqd") pod "fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9" (UID: "fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9"). InnerVolumeSpecName "kube-api-access-dvhqd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.881188 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:25:52 crc kubenswrapper[4765]: E0121 13:25:52.882240 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c893cff-6e87-4cd6-b439-a77dfefa7c7b" containerName="nova-metadata-log" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.882255 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c893cff-6e87-4cd6-b439-a77dfefa7c7b" containerName="nova-metadata-log" Jan 21 13:25:52 crc kubenswrapper[4765]: E0121 13:25:52.882294 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9" containerName="nova-scheduler-scheduler" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.882306 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9" containerName="nova-scheduler-scheduler" Jan 21 13:25:52 crc kubenswrapper[4765]: E0121 13:25:52.882326 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c893cff-6e87-4cd6-b439-a77dfefa7c7b" containerName="nova-metadata-metadata" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.882333 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c893cff-6e87-4cd6-b439-a77dfefa7c7b" containerName="nova-metadata-metadata" Jan 21 13:25:52 crc kubenswrapper[4765]: E0121 13:25:52.882354 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfbf6137-ef01-4260-8dca-c82ce4c55dd7" containerName="nova-api-api" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.882361 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfbf6137-ef01-4260-8dca-c82ce4c55dd7" containerName="nova-api-api" Jan 21 13:25:52 crc kubenswrapper[4765]: E0121 13:25:52.882373 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfbf6137-ef01-4260-8dca-c82ce4c55dd7" containerName="nova-api-log" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.882379 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfbf6137-ef01-4260-8dca-c82ce4c55dd7" containerName="nova-api-log" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.882677 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c893cff-6e87-4cd6-b439-a77dfefa7c7b" containerName="nova-metadata-metadata" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.882692 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c893cff-6e87-4cd6-b439-a77dfefa7c7b" containerName="nova-metadata-log" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.882718 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfbf6137-ef01-4260-8dca-c82ce4c55dd7" containerName="nova-api-api" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.882736 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfbf6137-ef01-4260-8dca-c82ce4c55dd7" containerName="nova-api-log" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.882752 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9" containerName="nova-scheduler-scheduler" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.887226 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.893727 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.893813 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.897901 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.899469 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.901853 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.911178 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snnp5\" (UniqueName: \"kubernetes.io/projected/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-kube-api-access-snnp5\") pod \"nova-metadata-0\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " pod="openstack/nova-metadata-0" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.911258 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " pod="openstack/nova-metadata-0" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.911329 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\") " pod="openstack/nova-api-0" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.911358 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-config-data\") pod \"nova-metadata-0\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " pod="openstack/nova-metadata-0" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.911380 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " pod="openstack/nova-metadata-0" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.911406 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-logs\") pod \"nova-metadata-0\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " pod="openstack/nova-metadata-0" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.911430 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-config-data\") pod \"nova-api-0\" (UID: \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\") " pod="openstack/nova-api-0" Jan 21 13:25:52 crc 
kubenswrapper[4765]: I0121 13:25:52.911458 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-logs\") pod \"nova-api-0\" (UID: \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\") " pod="openstack/nova-api-0" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.911474 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc49v\" (UniqueName: \"kubernetes.io/projected/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-kube-api-access-xc49v\") pod \"nova-api-0\" (UID: \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\") " pod="openstack/nova-api-0" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.911550 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvhqd\" (UniqueName: \"kubernetes.io/projected/fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9-kube-api-access-dvhqd\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.912053 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.914162 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9-config-data" (OuterVolumeSpecName: "config-data") pod "fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9" (UID: "fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.915297 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9" (UID: "fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
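The teardown entries above come in fixed pairs: operation_generator.go first logs "UnmountVolume.TearDown succeeded" for a volume, and reconciler_common.go later confirms it with a "Volume detached" line carrying the same volume name and pod UID. A minimal Python cross-check sketch, assuming the journal has been exported one entry per line (e.g. journalctl -u kubelet > kubelet.log; the path and the regexes are illustrative, derived only from the line formats visible here):

import re
from collections import defaultdict

# TearDown lines name the volume (OuterVolumeSpecName) and the pod UID.
TEARDOWN = re.compile(
    r'UnmountVolume\.TearDown succeeded for volume .*?'
    r'\(OuterVolumeSpecName: "([^"]+)"\) pod "([0-9a-f-]+)"')
# "Volume detached" lines carry the same name inside escaped quotes; the
# UniqueName embeds the pod UID (36 chars) as a prefix of the volume path.
DETACHED = re.compile(
    r'Volume detached for volume \\"([^"\\]+)\\" '
    r'\(UniqueName: \\"[^"\\]*/([0-9a-f-]{36})-')

def unmatched_teardowns(path):
    """Return {pod UID: volumes torn down but never reported detached}."""
    torn = defaultdict(set)   # pod UID -> volumes torn down
    gone = defaultdict(set)   # pod UID -> volumes confirmed detached
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            for name, uid in TEARDOWN.findall(line):
                torn[uid].add(name)
            for name, uid in DETACHED.findall(line):
                gone[uid].add(name)
    return {uid: torn[uid] - gone[uid] for uid in torn if torn[uid] - gone[uid]}

print(unmatched_teardowns("kubelet.log"))

For a clean shutdown such as pod 7c893cff-6e87-4cd6-b439-a77dfefa7c7b (nova-metadata-0) above, the result is empty; a non-empty entry would flag a volume stuck between unmount and detach.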
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:52 crc kubenswrapper[4765]: I0121 13:25:52.923879 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.013559 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snnp5\" (UniqueName: \"kubernetes.io/projected/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-kube-api-access-snnp5\") pod \"nova-metadata-0\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " pod="openstack/nova-metadata-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.013905 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " pod="openstack/nova-metadata-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.013983 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\") " pod="openstack/nova-api-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.014009 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-config-data\") pod \"nova-metadata-0\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " pod="openstack/nova-metadata-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.014026 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " pod="openstack/nova-metadata-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.014047 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-logs\") pod \"nova-metadata-0\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " pod="openstack/nova-metadata-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.014072 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-config-data\") pod \"nova-api-0\" (UID: \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\") " pod="openstack/nova-api-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.014104 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-logs\") pod \"nova-api-0\" (UID: \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\") " pod="openstack/nova-api-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.014125 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc49v\" (UniqueName: \"kubernetes.io/projected/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-kube-api-access-xc49v\") pod \"nova-api-0\" (UID: \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\") " pod="openstack/nova-api-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.014270 4765 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.014286 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.015715 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-logs\") pod \"nova-metadata-0\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " pod="openstack/nova-metadata-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.017554 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-logs\") pod \"nova-api-0\" (UID: \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\") " pod="openstack/nova-api-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.017857 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " pod="openstack/nova-metadata-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.019840 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-config-data\") pod \"nova-api-0\" (UID: \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\") " pod="openstack/nova-api-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.020420 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-config-data\") pod \"nova-metadata-0\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " pod="openstack/nova-metadata-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.023892 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\") " pod="openstack/nova-api-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.024261 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " pod="openstack/nova-metadata-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.034290 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc49v\" (UniqueName: \"kubernetes.io/projected/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-kube-api-access-xc49v\") pod \"nova-api-0\" (UID: \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\") " pod="openstack/nova-api-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.034946 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snnp5\" (UniqueName: \"kubernetes.io/projected/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-kube-api-access-snnp5\") pod \"nova-metadata-0\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " 
pod="openstack/nova-metadata-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.217627 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.229433 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.283449 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6558674dbd-lct5s" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.380663 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86c57777f6-gqpgv" podUID="1241b1f0-34c1-401a-b91f-13b72926cc2c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.564461 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.564650 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9","Type":"ContainerDied","Data":"55f6238b77e0e1d9faec2c48c58343f205bf3e745176687703ae0214c86f4a7d"} Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.565029 4765 scope.go:117] "RemoveContainer" containerID="565fef1513865cea70e7ad9897596a6ebd3cc0a19b12dc4a090961d2dd88de3a" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.570037 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"90f30caf-f36a-421c-b3fc-40d01f40d9e7","Type":"ContainerStarted","Data":"065795ebcce5cf6ba388fdbf96e851bd3bb29db8df72a84e9b232f22e2552bb0"} Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.570111 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"90f30caf-f36a-421c-b3fc-40d01f40d9e7","Type":"ContainerStarted","Data":"20afb59a2a3cf3564cff4239091704f5bd8516d7e0536a9b63bd53ae30f017e3"} Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.570177 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.612042 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.61202441 podStartE2EDuration="2.61202441s" podCreationTimestamp="2026-01-21 13:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:25:53.596410347 +0000 UTC m=+1414.614136169" watchObservedRunningTime="2026-01-21 13:25:53.61202441 +0000 UTC m=+1414.629750242" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.647253 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c893cff-6e87-4cd6-b439-a77dfefa7c7b" path="/var/lib/kubelet/pods/7c893cff-6e87-4cd6-b439-a77dfefa7c7b/volumes" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.648265 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="dfbf6137-ef01-4260-8dca-c82ce4c55dd7" path="/var/lib/kubelet/pods/dfbf6137-ef01-4260-8dca-c82ce4c55dd7/volumes" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.649764 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.662584 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.678284 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.679956 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.682330 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.689098 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.783955 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.832359 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2757aad1-8460-4e56-a626-3b4e332dcc91-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2757aad1-8460-4e56-a626-3b4e332dcc91\") " pod="openstack/nova-scheduler-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.832523 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2757aad1-8460-4e56-a626-3b4e332dcc91-config-data\") pod \"nova-scheduler-0\" (UID: \"2757aad1-8460-4e56-a626-3b4e332dcc91\") " pod="openstack/nova-scheduler-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.832557 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx5q4\" (UniqueName: \"kubernetes.io/projected/2757aad1-8460-4e56-a626-3b4e332dcc91-kube-api-access-hx5q4\") pod \"nova-scheduler-0\" (UID: \"2757aad1-8460-4e56-a626-3b4e332dcc91\") " pod="openstack/nova-scheduler-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.851090 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:25:53 crc kubenswrapper[4765]: W0121 13:25:53.861921 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bb2f2c9_9256_4574_9510_df23c9d5ac0f.slice/crio-cede47ec0e8259f59d6f57dc1b503ec7f1a49b2b8688096802ed0b87543c486a WatchSource:0}: Error finding container cede47ec0e8259f59d6f57dc1b503ec7f1a49b2b8688096802ed0b87543c486a: Status 404 returned error can't find the container with id cede47ec0e8259f59d6f57dc1b503ec7f1a49b2b8688096802ed0b87543c486a Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.934083 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2757aad1-8460-4e56-a626-3b4e332dcc91-config-data\") pod \"nova-scheduler-0\" (UID: \"2757aad1-8460-4e56-a626-3b4e332dcc91\") " pod="openstack/nova-scheduler-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.934430 4765 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-hx5q4\" (UniqueName: \"kubernetes.io/projected/2757aad1-8460-4e56-a626-3b4e332dcc91-kube-api-access-hx5q4\") pod \"nova-scheduler-0\" (UID: \"2757aad1-8460-4e56-a626-3b4e332dcc91\") " pod="openstack/nova-scheduler-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.934497 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2757aad1-8460-4e56-a626-3b4e332dcc91-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2757aad1-8460-4e56-a626-3b4e332dcc91\") " pod="openstack/nova-scheduler-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.945764 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2757aad1-8460-4e56-a626-3b4e332dcc91-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2757aad1-8460-4e56-a626-3b4e332dcc91\") " pod="openstack/nova-scheduler-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.947873 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2757aad1-8460-4e56-a626-3b4e332dcc91-config-data\") pod \"nova-scheduler-0\" (UID: \"2757aad1-8460-4e56-a626-3b4e332dcc91\") " pod="openstack/nova-scheduler-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.953399 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx5q4\" (UniqueName: \"kubernetes.io/projected/2757aad1-8460-4e56-a626-3b4e332dcc91-kube-api-access-hx5q4\") pod \"nova-scheduler-0\" (UID: \"2757aad1-8460-4e56-a626-3b4e332dcc91\") " pod="openstack/nova-scheduler-0" Jan 21 13:25:53 crc kubenswrapper[4765]: I0121 13:25:53.999079 4765 util.go:30] "No sandbox for pod can be found. 
Jan 21 13:25:54 crc kubenswrapper[4765]: I0121 13:25:54.553567 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:25:54 crc kubenswrapper[4765]: W0121 13:25:54.561269 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2757aad1_8460_4e56_a626_3b4e332dcc91.slice/crio-d78df7983816a005041dcede74dd1282b691c801d5c508bb16a78ada564b9a23 WatchSource:0}: Error finding container d78df7983816a005041dcede74dd1282b691c801d5c508bb16a78ada564b9a23: Status 404 returned error can't find the container with id d78df7983816a005041dcede74dd1282b691c801d5c508bb16a78ada564b9a23 Jan 21 13:25:54 crc kubenswrapper[4765]: I0121 13:25:54.583957 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2","Type":"ContainerStarted","Data":"96731ad47a3de7ff47a292c13eac49da838f62f9ec25fad2ace6243acd589a62"} Jan 21 13:25:54 crc kubenswrapper[4765]: I0121 13:25:54.584010 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2","Type":"ContainerStarted","Data":"b226945c0741ed2e8aa7fb1c4705ec73827e6dfd6aa7d242776b766bf1feb4f2"} Jan 21 13:25:54 crc kubenswrapper[4765]: I0121 13:25:54.584026 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2","Type":"ContainerStarted","Data":"9bbdcc948b800ba233274d64ecb6e83ad2359fecfaedc4ca6d1b95f3997abe2b"} Jan 21 13:25:54 crc kubenswrapper[4765]: I0121 13:25:54.587322 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2757aad1-8460-4e56-a626-3b4e332dcc91","Type":"ContainerStarted","Data":"d78df7983816a005041dcede74dd1282b691c801d5c508bb16a78ada564b9a23"} Jan 21 13:25:54 crc kubenswrapper[4765]: I0121 13:25:54.592604 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bb2f2c9-9256-4574-9510-df23c9d5ac0f","Type":"ContainerStarted","Data":"70fb3ea8bf6cb7216863a584d4e7c2ebe24cce30c749b0863db01672e5928dd4"} Jan 21 13:25:54 crc kubenswrapper[4765]: I0121 13:25:54.592642 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bb2f2c9-9256-4574-9510-df23c9d5ac0f","Type":"ContainerStarted","Data":"93e6bc2ef89251545c3f719c2983fb38c0f8ab9eb1381c840b4c99ce6106473c"} Jan 21 13:25:54 crc kubenswrapper[4765]: I0121 13:25:54.592652 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bb2f2c9-9256-4574-9510-df23c9d5ac0f","Type":"ContainerStarted","Data":"cede47ec0e8259f59d6f57dc1b503ec7f1a49b2b8688096802ed0b87543c486a"} Jan 21 13:25:54 crc kubenswrapper[4765]: I0121 13:25:54.618668 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.618646504 podStartE2EDuration="2.618646504s" podCreationTimestamp="2026-01-21 13:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:25:54.603697938 +0000 UTC m=+1415.621423760" watchObservedRunningTime="2026-01-21 13:25:54.618646504 +0000 UTC m=+1415.636372326" Jan 21 13:25:54 crc kubenswrapper[4765]: I0121 13:25:54.646383 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openstack/nova-api-0" podStartSLOduration=2.646361469 podStartE2EDuration="2.646361469s" podCreationTimestamp="2026-01-21 13:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:25:54.642333495 +0000 UTC m=+1415.660059327" watchObservedRunningTime="2026-01-21 13:25:54.646361469 +0000 UTC m=+1415.664087291" Jan 21 13:25:55 crc kubenswrapper[4765]: I0121 13:25:55.612671 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2757aad1-8460-4e56-a626-3b4e332dcc91","Type":"ContainerStarted","Data":"4427edbf26bb569f64774a03b34bc9aa42df5a2eae989aa22a6d444fe7451d6b"} Jan 21 13:25:55 crc kubenswrapper[4765]: I0121 13:25:55.623760 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9" path="/var/lib/kubelet/pods/fdd8ae5d-5ba6-44b2-9af7-a46fe7515dc9/volumes" Jan 21 13:25:55 crc kubenswrapper[4765]: I0121 13:25:55.637230 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.637193835 podStartE2EDuration="2.637193835s" podCreationTimestamp="2026-01-21 13:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:25:55.629770534 +0000 UTC m=+1416.647496386" watchObservedRunningTime="2026-01-21 13:25:55.637193835 +0000 UTC m=+1416.654919647" Jan 21 13:25:56 crc kubenswrapper[4765]: I0121 13:25:56.625663 4765 generic.go:334] "Generic (PLEG): container finished" podID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerID="59afb391cdacd27b9f7a2666cb850805cb893ba3455da77828c89b984f57d6e4" exitCode=0 Jan 21 13:25:56 crc kubenswrapper[4765]: I0121 13:25:56.627274 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0f3546f-0a2b-4529-b685-8674eb662a8b","Type":"ContainerDied","Data":"59afb391cdacd27b9f7a2666cb850805cb893ba3455da77828c89b984f57d6e4"} Jan 21 13:25:56 crc kubenswrapper[4765]: I0121 13:25:56.997906 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.125105 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-scripts\") pod \"d0f3546f-0a2b-4529-b685-8674eb662a8b\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.125160 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-combined-ca-bundle\") pod \"d0f3546f-0a2b-4529-b685-8674eb662a8b\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.125231 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0f3546f-0a2b-4529-b685-8674eb662a8b-log-httpd\") pod \"d0f3546f-0a2b-4529-b685-8674eb662a8b\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.125283 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x28md\" (UniqueName: \"kubernetes.io/projected/d0f3546f-0a2b-4529-b685-8674eb662a8b-kube-api-access-x28md\") pod \"d0f3546f-0a2b-4529-b685-8674eb662a8b\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.125399 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-sg-core-conf-yaml\") pod \"d0f3546f-0a2b-4529-b685-8674eb662a8b\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.125501 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0f3546f-0a2b-4529-b685-8674eb662a8b-run-httpd\") pod \"d0f3546f-0a2b-4529-b685-8674eb662a8b\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.125539 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-config-data\") pod \"d0f3546f-0a2b-4529-b685-8674eb662a8b\" (UID: \"d0f3546f-0a2b-4529-b685-8674eb662a8b\") " Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.126116 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0f3546f-0a2b-4529-b685-8674eb662a8b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d0f3546f-0a2b-4529-b685-8674eb662a8b" (UID: "d0f3546f-0a2b-4529-b685-8674eb662a8b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.127260 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0f3546f-0a2b-4529-b685-8674eb662a8b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d0f3546f-0a2b-4529-b685-8674eb662a8b" (UID: "d0f3546f-0a2b-4529-b685-8674eb662a8b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.138640 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0f3546f-0a2b-4529-b685-8674eb662a8b-kube-api-access-x28md" (OuterVolumeSpecName: "kube-api-access-x28md") pod "d0f3546f-0a2b-4529-b685-8674eb662a8b" (UID: "d0f3546f-0a2b-4529-b685-8674eb662a8b"). InnerVolumeSpecName "kube-api-access-x28md". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.147413 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-scripts" (OuterVolumeSpecName: "scripts") pod "d0f3546f-0a2b-4529-b685-8674eb662a8b" (UID: "d0f3546f-0a2b-4529-b685-8674eb662a8b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.158578 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d0f3546f-0a2b-4529-b685-8674eb662a8b" (UID: "d0f3546f-0a2b-4529-b685-8674eb662a8b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.227832 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.227866 4765 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0f3546f-0a2b-4529-b685-8674eb662a8b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.227879 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x28md\" (UniqueName: \"kubernetes.io/projected/d0f3546f-0a2b-4529-b685-8674eb662a8b-kube-api-access-x28md\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.227893 4765 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.227903 4765 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0f3546f-0a2b-4529-b685-8674eb662a8b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.245373 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-config-data" (OuterVolumeSpecName: "config-data") pod "d0f3546f-0a2b-4529-b685-8674eb662a8b" (UID: "d0f3546f-0a2b-4529-b685-8674eb662a8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.259531 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0f3546f-0a2b-4529-b685-8674eb662a8b" (UID: "d0f3546f-0a2b-4529-b685-8674eb662a8b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.330311 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.330357 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0f3546f-0a2b-4529-b685-8674eb662a8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.635658 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0f3546f-0a2b-4529-b685-8674eb662a8b","Type":"ContainerDied","Data":"b9576de088c2cf26dfa6531bf2219ab75d1d1a81a2d1196ff40eabdc46a5daf6"} Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.636007 4765 scope.go:117] "RemoveContainer" containerID="cd8977f9ca18e3b957bce3f63e6542809c46e517bf63610e96d17ec5b0b5ff9e" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.636160 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.675651 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.706361 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.739850 4765 scope.go:117] "RemoveContainer" containerID="b3d2fe8470a06b0e221cc2677ca2f89ef5fd7561ed66b7b1c81e02af50d85a12" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.756596 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:25:57 crc kubenswrapper[4765]: E0121 13:25:57.758179 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerName="proxy-httpd" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.758205 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerName="proxy-httpd" Jan 21 13:25:57 crc kubenswrapper[4765]: E0121 13:25:57.758261 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerName="ceilometer-central-agent" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.758270 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerName="ceilometer-central-agent" Jan 21 13:25:57 crc kubenswrapper[4765]: E0121 13:25:57.758305 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerName="sg-core" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.758312 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerName="sg-core" Jan 21 13:25:57 crc kubenswrapper[4765]: E0121 13:25:57.758362 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerName="ceilometer-notification-agent" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.758372 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerName="ceilometer-notification-agent" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.758823 4765 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerName="ceilometer-central-agent" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.758871 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerName="ceilometer-notification-agent" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.758889 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerName="proxy-httpd" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.758904 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0f3546f-0a2b-4529-b685-8674eb662a8b" containerName="sg-core" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.764837 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.767895 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.767996 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.768274 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.797375 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.804838 4765 scope.go:117] "RemoveContainer" containerID="59afb391cdacd27b9f7a2666cb850805cb893ba3455da77828c89b984f57d6e4" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.835032 4765 scope.go:117] "RemoveContainer" containerID="29245bc8840badecf93e9f0b20ca87b361bedc52a01b76e4b6eea0818764966d" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.840204 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40ac07c1-e189-4608-9a34-ec3396095b5e-run-httpd\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.840371 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4blk\" (UniqueName: \"kubernetes.io/projected/40ac07c1-e189-4608-9a34-ec3396095b5e-kube-api-access-j4blk\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.840467 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.840571 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-scripts\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.840645 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40ac07c1-e189-4608-9a34-ec3396095b5e-log-httpd\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.840733 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.840824 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.840907 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-config-data\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.942490 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4blk\" (UniqueName: \"kubernetes.io/projected/40ac07c1-e189-4608-9a34-ec3396095b5e-kube-api-access-j4blk\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.942622 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.942672 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-scripts\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.942716 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40ac07c1-e189-4608-9a34-ec3396095b5e-log-httpd\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.942764 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.942802 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 
13:25:57.942847 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-config-data\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.942931 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40ac07c1-e189-4608-9a34-ec3396095b5e-run-httpd\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.943274 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40ac07c1-e189-4608-9a34-ec3396095b5e-log-httpd\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.943473 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40ac07c1-e189-4608-9a34-ec3396095b5e-run-httpd\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.948086 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.948683 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.955419 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.956690 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-scripts\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.964655 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4blk\" (UniqueName: \"kubernetes.io/projected/40ac07c1-e189-4608-9a34-ec3396095b5e-kube-api-access-j4blk\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:57 crc kubenswrapper[4765]: I0121 13:25:57.965358 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-config-data\") pod \"ceilometer-0\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " pod="openstack/ceilometer-0" Jan 21 13:25:58 crc kubenswrapper[4765]: I0121 13:25:58.090722 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:25:58 crc kubenswrapper[4765]: I0121 13:25:58.218031 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 13:25:58 crc kubenswrapper[4765]: I0121 13:25:58.219708 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 13:25:58 crc kubenswrapper[4765]: I0121 13:25:58.601559 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:25:58 crc kubenswrapper[4765]: W0121 13:25:58.605580 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40ac07c1_e189_4608_9a34_ec3396095b5e.slice/crio-531a78e47bdc3066e3f5ccb450c40972204d89f0b28804ebac0d26bee5aa201e WatchSource:0}: Error finding container 531a78e47bdc3066e3f5ccb450c40972204d89f0b28804ebac0d26bee5aa201e: Status 404 returned error can't find the container with id 531a78e47bdc3066e3f5ccb450c40972204d89f0b28804ebac0d26bee5aa201e Jan 21 13:25:58 crc kubenswrapper[4765]: I0121 13:25:58.653240 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40ac07c1-e189-4608-9a34-ec3396095b5e","Type":"ContainerStarted","Data":"531a78e47bdc3066e3f5ccb450c40972204d89f0b28804ebac0d26bee5aa201e"} Jan 21 13:25:59 crc kubenswrapper[4765]: I0121 13:25:59.000745 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 13:25:59 crc kubenswrapper[4765]: I0121 13:25:59.071079 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 13:25:59 crc kubenswrapper[4765]: I0121 13:25:59.627783 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0f3546f-0a2b-4529-b685-8674eb662a8b" path="/var/lib/kubelet/pods/d0f3546f-0a2b-4529-b685-8674eb662a8b/volumes" Jan 21 13:25:59 crc kubenswrapper[4765]: I0121 13:25:59.664162 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40ac07c1-e189-4608-9a34-ec3396095b5e","Type":"ContainerStarted","Data":"8e453dfa63abaf10e3e7ebb054d4727ffa0ac47c630488538022f1245c95da41"} Jan 21 13:26:00 crc kubenswrapper[4765]: I0121 13:26:00.679102 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40ac07c1-e189-4608-9a34-ec3396095b5e","Type":"ContainerStarted","Data":"7a6925cbacb6fa17e2dbdee171a4352fdc1e25bf4c1321794812ef5c210b2df4"} Jan 21 13:26:01 crc kubenswrapper[4765]: I0121 13:26:01.693168 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40ac07c1-e189-4608-9a34-ec3396095b5e","Type":"ContainerStarted","Data":"c880926176ca0ff1f48c4214168eb2527af70b4ab611a801029891161d140b6c"} Jan 21 13:26:02 crc kubenswrapper[4765]: I0121 13:26:02.012098 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 21 13:26:03 crc kubenswrapper[4765]: I0121 13:26:03.218768 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 13:26:03 crc kubenswrapper[4765]: I0121 13:26:03.219439 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 13:26:03 crc kubenswrapper[4765]: I0121 13:26:03.231000 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 
13:26:03 crc kubenswrapper[4765]: I0121 13:26:03.231062 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 13:26:03 crc kubenswrapper[4765]: I0121 13:26:03.280263 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6558674dbd-lct5s" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 21 13:26:03 crc kubenswrapper[4765]: I0121 13:26:03.376100 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-86c57777f6-gqpgv" podUID="1241b1f0-34c1-401a-b91f-13b72926cc2c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 21 13:26:03 crc kubenswrapper[4765]: I0121 13:26:03.716496 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40ac07c1-e189-4608-9a34-ec3396095b5e","Type":"ContainerStarted","Data":"9268a910527aca58f6d48420d5c50027c5628c24191123f9a85dcddd4ba58aa3"} Jan 21 13:26:03 crc kubenswrapper[4765]: I0121 13:26:03.717718 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 13:26:03 crc kubenswrapper[4765]: I0121 13:26:03.741158 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.8985312370000003 podStartE2EDuration="6.741136849s" podCreationTimestamp="2026-01-21 13:25:57 +0000 UTC" firstStartedPulling="2026-01-21 13:25:58.619395515 +0000 UTC m=+1419.637121337" lastFinishedPulling="2026-01-21 13:26:02.462001127 +0000 UTC m=+1423.479726949" observedRunningTime="2026-01-21 13:26:03.737783333 +0000 UTC m=+1424.755509155" watchObservedRunningTime="2026-01-21 13:26:03.741136849 +0000 UTC m=+1424.758862671" Jan 21 13:26:04 crc kubenswrapper[4765]: I0121 13:26:04.000659 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 13:26:04 crc kubenswrapper[4765]: I0121 13:26:04.053393 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 13:26:04 crc kubenswrapper[4765]: I0121 13:26:04.239859 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 13:26:04 crc kubenswrapper[4765]: I0121 13:26:04.239861 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 13:26:04 crc kubenswrapper[4765]: I0121 13:26:04.322532 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1bb2f2c9-9256-4574-9510-df23c9d5ac0f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 13:26:04 crc kubenswrapper[4765]: I0121 13:26:04.322857 4765 prober.go:107] 
"Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1bb2f2c9-9256-4574-9510-df23c9d5ac0f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 13:26:04 crc kubenswrapper[4765]: I0121 13:26:04.768331 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 13:26:12 crc kubenswrapper[4765]: I0121 13:26:12.658079 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:12 crc kubenswrapper[4765]: I0121 13:26:12.797246 4765 generic.go:334] "Generic (PLEG): container finished" podID="89aaf065-f456-4efe-bfdd-dafb090e4149" containerID="960a0127400eb29256b1bcfc90b651f706ca0d9b4c1325eefbe661afa86aca1a" exitCode=137 Jan 21 13:26:12 crc kubenswrapper[4765]: I0121 13:26:12.797301 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"89aaf065-f456-4efe-bfdd-dafb090e4149","Type":"ContainerDied","Data":"960a0127400eb29256b1bcfc90b651f706ca0d9b4c1325eefbe661afa86aca1a"} Jan 21 13:26:12 crc kubenswrapper[4765]: I0121 13:26:12.797341 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"89aaf065-f456-4efe-bfdd-dafb090e4149","Type":"ContainerDied","Data":"dbcc81ebf496f1347d9ae588363e9f163c7fd1299bfaea87ab007ca9eb5de01f"} Jan 21 13:26:12 crc kubenswrapper[4765]: I0121 13:26:12.797359 4765 scope.go:117] "RemoveContainer" containerID="960a0127400eb29256b1bcfc90b651f706ca0d9b4c1325eefbe661afa86aca1a" Jan 21 13:26:12 crc kubenswrapper[4765]: I0121 13:26:12.797358 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:12 crc kubenswrapper[4765]: I0121 13:26:12.824898 4765 scope.go:117] "RemoveContainer" containerID="960a0127400eb29256b1bcfc90b651f706ca0d9b4c1325eefbe661afa86aca1a" Jan 21 13:26:12 crc kubenswrapper[4765]: E0121 13:26:12.825388 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"960a0127400eb29256b1bcfc90b651f706ca0d9b4c1325eefbe661afa86aca1a\": container with ID starting with 960a0127400eb29256b1bcfc90b651f706ca0d9b4c1325eefbe661afa86aca1a not found: ID does not exist" containerID="960a0127400eb29256b1bcfc90b651f706ca0d9b4c1325eefbe661afa86aca1a" Jan 21 13:26:12 crc kubenswrapper[4765]: I0121 13:26:12.825439 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"960a0127400eb29256b1bcfc90b651f706ca0d9b4c1325eefbe661afa86aca1a"} err="failed to get container status \"960a0127400eb29256b1bcfc90b651f706ca0d9b4c1325eefbe661afa86aca1a\": rpc error: code = NotFound desc = could not find container \"960a0127400eb29256b1bcfc90b651f706ca0d9b4c1325eefbe661afa86aca1a\": container with ID starting with 960a0127400eb29256b1bcfc90b651f706ca0d9b4c1325eefbe661afa86aca1a not found: ID does not exist" Jan 21 13:26:12 crc kubenswrapper[4765]: I0121 13:26:12.838627 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twps2\" (UniqueName: \"kubernetes.io/projected/89aaf065-f456-4efe-bfdd-dafb090e4149-kube-api-access-twps2\") pod \"89aaf065-f456-4efe-bfdd-dafb090e4149\" (UID: \"89aaf065-f456-4efe-bfdd-dafb090e4149\") " Jan 21 13:26:12 crc kubenswrapper[4765]: I0121 13:26:12.838825 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89aaf065-f456-4efe-bfdd-dafb090e4149-combined-ca-bundle\") pod \"89aaf065-f456-4efe-bfdd-dafb090e4149\" (UID: \"89aaf065-f456-4efe-bfdd-dafb090e4149\") " Jan 21 13:26:12 crc kubenswrapper[4765]: I0121 13:26:12.838979 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89aaf065-f456-4efe-bfdd-dafb090e4149-config-data\") pod \"89aaf065-f456-4efe-bfdd-dafb090e4149\" (UID: \"89aaf065-f456-4efe-bfdd-dafb090e4149\") " Jan 21 13:26:12 crc kubenswrapper[4765]: I0121 13:26:12.849160 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89aaf065-f456-4efe-bfdd-dafb090e4149-kube-api-access-twps2" (OuterVolumeSpecName: "kube-api-access-twps2") pod "89aaf065-f456-4efe-bfdd-dafb090e4149" (UID: "89aaf065-f456-4efe-bfdd-dafb090e4149"). InnerVolumeSpecName "kube-api-access-twps2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:26:12 crc kubenswrapper[4765]: I0121 13:26:12.878554 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89aaf065-f456-4efe-bfdd-dafb090e4149-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89aaf065-f456-4efe-bfdd-dafb090e4149" (UID: "89aaf065-f456-4efe-bfdd-dafb090e4149"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:26:12 crc kubenswrapper[4765]: I0121 13:26:12.879795 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89aaf065-f456-4efe-bfdd-dafb090e4149-config-data" (OuterVolumeSpecName: "config-data") pod "89aaf065-f456-4efe-bfdd-dafb090e4149" (UID: "89aaf065-f456-4efe-bfdd-dafb090e4149"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:26:12 crc kubenswrapper[4765]: I0121 13:26:12.942321 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89aaf065-f456-4efe-bfdd-dafb090e4149-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:12 crc kubenswrapper[4765]: I0121 13:26:12.942403 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twps2\" (UniqueName: \"kubernetes.io/projected/89aaf065-f456-4efe-bfdd-dafb090e4149-kube-api-access-twps2\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:12 crc kubenswrapper[4765]: I0121 13:26:12.942425 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89aaf065-f456-4efe-bfdd-dafb090e4149-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.152489 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.170744 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.185130 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 13:26:13 crc kubenswrapper[4765]: E0121 13:26:13.188239 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89aaf065-f456-4efe-bfdd-dafb090e4149" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.188265 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="89aaf065-f456-4efe-bfdd-dafb090e4149" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.188606 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="89aaf065-f456-4efe-bfdd-dafb090e4149" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.191976 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.198671 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.198797 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.199665 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.211863 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.224969 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.225869 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.231259 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.232170 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.233429 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.234753 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.235018 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.237929 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.355167 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9571353-0716-428c-8462-0fa1c4fc8ab3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a9571353-0716-428c-8462-0fa1c4fc8ab3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.355364 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9571353-0716-428c-8462-0fa1c4fc8ab3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a9571353-0716-428c-8462-0fa1c4fc8ab3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.355387 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9571353-0716-428c-8462-0fa1c4fc8ab3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a9571353-0716-428c-8462-0fa1c4fc8ab3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.355444 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a9571353-0716-428c-8462-0fa1c4fc8ab3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a9571353-0716-428c-8462-0fa1c4fc8ab3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.355475 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8gds\" (UniqueName: \"kubernetes.io/projected/a9571353-0716-428c-8462-0fa1c4fc8ab3-kube-api-access-s8gds\") pod \"nova-cell1-novncproxy-0\" (UID: \"a9571353-0716-428c-8462-0fa1c4fc8ab3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.456896 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9571353-0716-428c-8462-0fa1c4fc8ab3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a9571353-0716-428c-8462-0fa1c4fc8ab3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.457021 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9571353-0716-428c-8462-0fa1c4fc8ab3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a9571353-0716-428c-8462-0fa1c4fc8ab3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.457042 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9571353-0716-428c-8462-0fa1c4fc8ab3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a9571353-0716-428c-8462-0fa1c4fc8ab3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.457077 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9571353-0716-428c-8462-0fa1c4fc8ab3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a9571353-0716-428c-8462-0fa1c4fc8ab3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.457104 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8gds\" (UniqueName: \"kubernetes.io/projected/a9571353-0716-428c-8462-0fa1c4fc8ab3-kube-api-access-s8gds\") pod \"nova-cell1-novncproxy-0\" (UID: \"a9571353-0716-428c-8462-0fa1c4fc8ab3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.463106 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9571353-0716-428c-8462-0fa1c4fc8ab3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a9571353-0716-428c-8462-0fa1c4fc8ab3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.463481 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9571353-0716-428c-8462-0fa1c4fc8ab3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a9571353-0716-428c-8462-0fa1c4fc8ab3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.465786 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a9571353-0716-428c-8462-0fa1c4fc8ab3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a9571353-0716-428c-8462-0fa1c4fc8ab3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.466225 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9571353-0716-428c-8462-0fa1c4fc8ab3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a9571353-0716-428c-8462-0fa1c4fc8ab3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.478894 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8gds\" (UniqueName: \"kubernetes.io/projected/a9571353-0716-428c-8462-0fa1c4fc8ab3-kube-api-access-s8gds\") pod \"nova-cell1-novncproxy-0\" (UID: \"a9571353-0716-428c-8462-0fa1c4fc8ab3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.525078 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.628993 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89aaf065-f456-4efe-bfdd-dafb090e4149" path="/var/lib/kubelet/pods/89aaf065-f456-4efe-bfdd-dafb090e4149/volumes" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.829947 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 13:26:13 crc kubenswrapper[4765]: I0121 13:26:13.878025 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.097785 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-brgf9"] Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.103909 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.129445 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-brgf9"] Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.279756 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-brgf9\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.279888 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-brgf9\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.281673 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-config\") pod \"dnsmasq-dns-cd5cbd7b9-brgf9\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.281789 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zrw\" (UniqueName: \"kubernetes.io/projected/bedcf2ee-ca90-440d-bc45-7022079ed9e4-kube-api-access-r5zrw\") pod \"dnsmasq-dns-cd5cbd7b9-brgf9\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.282423 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-brgf9\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.282489 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-brgf9\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.282537 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.384588 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-config\") pod \"dnsmasq-dns-cd5cbd7b9-brgf9\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.384640 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5zrw\" (UniqueName: \"kubernetes.io/projected/bedcf2ee-ca90-440d-bc45-7022079ed9e4-kube-api-access-r5zrw\") pod \"dnsmasq-dns-cd5cbd7b9-brgf9\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " 
pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.384694 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-brgf9\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.384720 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-brgf9\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.384772 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-brgf9\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.384810 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-brgf9\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.385575 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-brgf9\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.385797 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-brgf9\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.386076 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-brgf9\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.386810 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-brgf9\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.387131 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-config\") pod \"dnsmasq-dns-cd5cbd7b9-brgf9\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.404936 4765 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5zrw\" (UniqueName: \"kubernetes.io/projected/bedcf2ee-ca90-440d-bc45-7022079ed9e4-kube-api-access-r5zrw\") pod \"dnsmasq-dns-cd5cbd7b9-brgf9\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.445907 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.446268 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.453641 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.844900 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a9571353-0716-428c-8462-0fa1c4fc8ab3","Type":"ContainerStarted","Data":"b70d0a6050ca1538e017c6c872c49261a7cfba17947590014d1c2e9f274693be"} Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.845165 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a9571353-0716-428c-8462-0fa1c4fc8ab3","Type":"ContainerStarted","Data":"d2745c89a5b4d942a3657478de734b1e1c6fb1b2a35af9d47ccdd8a467d7bda4"} Jan 21 13:26:14 crc kubenswrapper[4765]: I0121 13:26:14.868912 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.8688942179999999 podStartE2EDuration="1.868894218s" podCreationTimestamp="2026-01-21 13:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:26:14.858407387 +0000 UTC m=+1435.876133209" watchObservedRunningTime="2026-01-21 13:26:14.868894218 +0000 UTC m=+1435.886620040" Jan 21 13:26:15 crc kubenswrapper[4765]: I0121 13:26:15.025012 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-brgf9"] Jan 21 13:26:15 crc kubenswrapper[4765]: I0121 13:26:15.854481 4765 generic.go:334] "Generic (PLEG): container finished" podID="bedcf2ee-ca90-440d-bc45-7022079ed9e4" containerID="0e474ad7e1494a3e602a146ea1501408fd141d6b61d6f993d2e3a325c31ee690" exitCode=0 Jan 21 13:26:15 crc kubenswrapper[4765]: I0121 13:26:15.854551 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" event={"ID":"bedcf2ee-ca90-440d-bc45-7022079ed9e4","Type":"ContainerDied","Data":"0e474ad7e1494a3e602a146ea1501408fd141d6b61d6f993d2e3a325c31ee690"} Jan 21 13:26:15 crc kubenswrapper[4765]: I0121 13:26:15.854583 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" event={"ID":"bedcf2ee-ca90-440d-bc45-7022079ed9e4","Type":"ContainerStarted","Data":"ad4335dc772311adc88aa36375437fffa7c008aee8ecee5af1f1e12713d041f7"} Jan 21 13:26:16 crc kubenswrapper[4765]: I0121 13:26:16.867125 
4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" event={"ID":"bedcf2ee-ca90-440d-bc45-7022079ed9e4","Type":"ContainerStarted","Data":"5343b4efc7b08cd51a85c5f59689c9ed61d251cedf7d3d181d10ada6d64d098a"} Jan 21 13:26:16 crc kubenswrapper[4765]: I0121 13:26:16.867764 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:26:16 crc kubenswrapper[4765]: I0121 13:26:16.899189 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" podStartSLOduration=2.899171535 podStartE2EDuration="2.899171535s" podCreationTimestamp="2026-01-21 13:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:26:16.888064689 +0000 UTC m=+1437.905790511" watchObservedRunningTime="2026-01-21 13:26:16.899171535 +0000 UTC m=+1437.916897357" Jan 21 13:26:17 crc kubenswrapper[4765]: I0121 13:26:17.235280 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-86c57777f6-gqpgv" Jan 21 13:26:17 crc kubenswrapper[4765]: I0121 13:26:17.304797 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6558674dbd-lct5s" Jan 21 13:26:17 crc kubenswrapper[4765]: I0121 13:26:17.407600 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:26:17 crc kubenswrapper[4765]: I0121 13:26:17.407907 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerName="sg-core" containerID="cri-o://c880926176ca0ff1f48c4214168eb2527af70b4ab611a801029891161d140b6c" gracePeriod=30 Jan 21 13:26:17 crc kubenswrapper[4765]: I0121 13:26:17.408022 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerName="proxy-httpd" containerID="cri-o://9268a910527aca58f6d48420d5c50027c5628c24191123f9a85dcddd4ba58aa3" gracePeriod=30 Jan 21 13:26:17 crc kubenswrapper[4765]: I0121 13:26:17.408088 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerName="ceilometer-notification-agent" containerID="cri-o://7a6925cbacb6fa17e2dbdee171a4352fdc1e25bf4c1321794812ef5c210b2df4" gracePeriod=30 Jan 21 13:26:17 crc kubenswrapper[4765]: I0121 13:26:17.407872 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerName="ceilometer-central-agent" containerID="cri-o://8e453dfa63abaf10e3e7ebb054d4727ffa0ac47c630488538022f1245c95da41" gracePeriod=30 Jan 21 13:26:17 crc kubenswrapper[4765]: I0121 13:26:17.421200 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.200:3000/\": read tcp 10.217.0.2:40630->10.217.0.200:3000: read: connection reset by peer" Jan 21 13:26:17 crc kubenswrapper[4765]: I0121 13:26:17.602675 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:26:17 crc kubenswrapper[4765]: I0121 13:26:17.602893 4765 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-api-0" podUID="1bb2f2c9-9256-4574-9510-df23c9d5ac0f" containerName="nova-api-log" containerID="cri-o://93e6bc2ef89251545c3f719c2983fb38c0f8ab9eb1381c840b4c99ce6106473c" gracePeriod=30 Jan 21 13:26:17 crc kubenswrapper[4765]: I0121 13:26:17.603332 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1bb2f2c9-9256-4574-9510-df23c9d5ac0f" containerName="nova-api-api" containerID="cri-o://70fb3ea8bf6cb7216863a584d4e7c2ebe24cce30c749b0863db01672e5928dd4" gracePeriod=30 Jan 21 13:26:17 crc kubenswrapper[4765]: I0121 13:26:17.881409 4765 generic.go:334] "Generic (PLEG): container finished" podID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerID="9268a910527aca58f6d48420d5c50027c5628c24191123f9a85dcddd4ba58aa3" exitCode=0 Jan 21 13:26:17 crc kubenswrapper[4765]: I0121 13:26:17.881457 4765 generic.go:334] "Generic (PLEG): container finished" podID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerID="c880926176ca0ff1f48c4214168eb2527af70b4ab611a801029891161d140b6c" exitCode=2 Jan 21 13:26:17 crc kubenswrapper[4765]: I0121 13:26:17.881530 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40ac07c1-e189-4608-9a34-ec3396095b5e","Type":"ContainerDied","Data":"9268a910527aca58f6d48420d5c50027c5628c24191123f9a85dcddd4ba58aa3"} Jan 21 13:26:17 crc kubenswrapper[4765]: I0121 13:26:17.881664 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40ac07c1-e189-4608-9a34-ec3396095b5e","Type":"ContainerDied","Data":"c880926176ca0ff1f48c4214168eb2527af70b4ab611a801029891161d140b6c"} Jan 21 13:26:17 crc kubenswrapper[4765]: I0121 13:26:17.886331 4765 generic.go:334] "Generic (PLEG): container finished" podID="1bb2f2c9-9256-4574-9510-df23c9d5ac0f" containerID="93e6bc2ef89251545c3f719c2983fb38c0f8ab9eb1381c840b4c99ce6106473c" exitCode=143 Jan 21 13:26:17 crc kubenswrapper[4765]: I0121 13:26:17.886406 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bb2f2c9-9256-4574-9510-df23c9d5ac0f","Type":"ContainerDied","Data":"93e6bc2ef89251545c3f719c2983fb38c0f8ab9eb1381c840b4c99ce6106473c"} Jan 21 13:26:18 crc kubenswrapper[4765]: I0121 13:26:18.525527 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 13:26:18 crc kubenswrapper[4765]: I0121 13:26:18.900377 4765 generic.go:334] "Generic (PLEG): container finished" podID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerID="8e453dfa63abaf10e3e7ebb054d4727ffa0ac47c630488538022f1245c95da41" exitCode=0 Jan 21 13:26:18 crc kubenswrapper[4765]: I0121 13:26:18.900432 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40ac07c1-e189-4608-9a34-ec3396095b5e","Type":"ContainerDied","Data":"8e453dfa63abaf10e3e7ebb054d4727ffa0ac47c630488538022f1245c95da41"} Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.589851 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f6k58"] Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.592342 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f6k58" Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.602477 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f6k58"] Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.664162 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-86c57777f6-gqpgv" Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.747532 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6558674dbd-lct5s"] Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.747745 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6558674dbd-lct5s" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon-log" containerID="cri-o://80bf8f8075aaafb1737281da7be1eba64cc3312c18d9db5a1ce9e20ad270bd85" gracePeriod=30 Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.748176 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6558674dbd-lct5s" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" containerID="cri-o://689cae05dcc0e9b9b0adda9d542e5c2b2db33884367706410ddf5bee650aba60" gracePeriod=30 Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.768987 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6558674dbd-lct5s" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.785401 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4762a010-6fab-4ee6-bbb8-f5d6669b079a-utilities\") pod \"redhat-operators-f6k58\" (UID: \"4762a010-6fab-4ee6-bbb8-f5d6669b079a\") " pod="openshift-marketplace/redhat-operators-f6k58" Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.785766 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4762a010-6fab-4ee6-bbb8-f5d6669b079a-catalog-content\") pod \"redhat-operators-f6k58\" (UID: \"4762a010-6fab-4ee6-bbb8-f5d6669b079a\") " pod="openshift-marketplace/redhat-operators-f6k58" Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.786410 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt65h\" (UniqueName: \"kubernetes.io/projected/4762a010-6fab-4ee6-bbb8-f5d6669b079a-kube-api-access-kt65h\") pod \"redhat-operators-f6k58\" (UID: \"4762a010-6fab-4ee6-bbb8-f5d6669b079a\") " pod="openshift-marketplace/redhat-operators-f6k58" Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.888953 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4762a010-6fab-4ee6-bbb8-f5d6669b079a-catalog-content\") pod \"redhat-operators-f6k58\" (UID: \"4762a010-6fab-4ee6-bbb8-f5d6669b079a\") " pod="openshift-marketplace/redhat-operators-f6k58" Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.889008 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt65h\" (UniqueName: \"kubernetes.io/projected/4762a010-6fab-4ee6-bbb8-f5d6669b079a-kube-api-access-kt65h\") pod \"redhat-operators-f6k58\" 
(UID: \"4762a010-6fab-4ee6-bbb8-f5d6669b079a\") " pod="openshift-marketplace/redhat-operators-f6k58" Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.889167 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4762a010-6fab-4ee6-bbb8-f5d6669b079a-utilities\") pod \"redhat-operators-f6k58\" (UID: \"4762a010-6fab-4ee6-bbb8-f5d6669b079a\") " pod="openshift-marketplace/redhat-operators-f6k58" Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.889853 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4762a010-6fab-4ee6-bbb8-f5d6669b079a-utilities\") pod \"redhat-operators-f6k58\" (UID: \"4762a010-6fab-4ee6-bbb8-f5d6669b079a\") " pod="openshift-marketplace/redhat-operators-f6k58" Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.890111 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4762a010-6fab-4ee6-bbb8-f5d6669b079a-catalog-content\") pod \"redhat-operators-f6k58\" (UID: \"4762a010-6fab-4ee6-bbb8-f5d6669b079a\") " pod="openshift-marketplace/redhat-operators-f6k58" Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.921025 4765 generic.go:334] "Generic (PLEG): container finished" podID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerID="7a6925cbacb6fa17e2dbdee171a4352fdc1e25bf4c1321794812ef5c210b2df4" exitCode=0 Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.921190 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt65h\" (UniqueName: \"kubernetes.io/projected/4762a010-6fab-4ee6-bbb8-f5d6669b079a-kube-api-access-kt65h\") pod \"redhat-operators-f6k58\" (UID: \"4762a010-6fab-4ee6-bbb8-f5d6669b079a\") " pod="openshift-marketplace/redhat-operators-f6k58" Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.921660 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40ac07c1-e189-4608-9a34-ec3396095b5e","Type":"ContainerDied","Data":"7a6925cbacb6fa17e2dbdee171a4352fdc1e25bf4c1321794812ef5c210b2df4"} Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.921748 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40ac07c1-e189-4608-9a34-ec3396095b5e","Type":"ContainerDied","Data":"531a78e47bdc3066e3f5ccb450c40972204d89f0b28804ebac0d26bee5aa201e"} Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.921812 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="531a78e47bdc3066e3f5ccb450c40972204d89f0b28804ebac0d26bee5aa201e" Jan 21 13:26:19 crc kubenswrapper[4765]: I0121 13:26:19.922446 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f6k58" Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.011891 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.221433 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-sg-core-conf-yaml\") pod \"40ac07c1-e189-4608-9a34-ec3396095b5e\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.221796 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4blk\" (UniqueName: \"kubernetes.io/projected/40ac07c1-e189-4608-9a34-ec3396095b5e-kube-api-access-j4blk\") pod \"40ac07c1-e189-4608-9a34-ec3396095b5e\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.221826 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-config-data\") pod \"40ac07c1-e189-4608-9a34-ec3396095b5e\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.221907 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40ac07c1-e189-4608-9a34-ec3396095b5e-log-httpd\") pod \"40ac07c1-e189-4608-9a34-ec3396095b5e\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.221946 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40ac07c1-e189-4608-9a34-ec3396095b5e-run-httpd\") pod \"40ac07c1-e189-4608-9a34-ec3396095b5e\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.222000 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-combined-ca-bundle\") pod \"40ac07c1-e189-4608-9a34-ec3396095b5e\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.222037 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-scripts\") pod \"40ac07c1-e189-4608-9a34-ec3396095b5e\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.222097 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-ceilometer-tls-certs\") pod \"40ac07c1-e189-4608-9a34-ec3396095b5e\" (UID: \"40ac07c1-e189-4608-9a34-ec3396095b5e\") " Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.223358 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40ac07c1-e189-4608-9a34-ec3396095b5e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "40ac07c1-e189-4608-9a34-ec3396095b5e" (UID: "40ac07c1-e189-4608-9a34-ec3396095b5e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.230058 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40ac07c1-e189-4608-9a34-ec3396095b5e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "40ac07c1-e189-4608-9a34-ec3396095b5e" (UID: "40ac07c1-e189-4608-9a34-ec3396095b5e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.235226 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40ac07c1-e189-4608-9a34-ec3396095b5e-kube-api-access-j4blk" (OuterVolumeSpecName: "kube-api-access-j4blk") pod "40ac07c1-e189-4608-9a34-ec3396095b5e" (UID: "40ac07c1-e189-4608-9a34-ec3396095b5e"). InnerVolumeSpecName "kube-api-access-j4blk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.239508 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-scripts" (OuterVolumeSpecName: "scripts") pod "40ac07c1-e189-4608-9a34-ec3396095b5e" (UID: "40ac07c1-e189-4608-9a34-ec3396095b5e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.328611 4765 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40ac07c1-e189-4608-9a34-ec3396095b5e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.328638 4765 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40ac07c1-e189-4608-9a34-ec3396095b5e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.328647 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.328656 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4blk\" (UniqueName: \"kubernetes.io/projected/40ac07c1-e189-4608-9a34-ec3396095b5e-kube-api-access-j4blk\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.331461 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "40ac07c1-e189-4608-9a34-ec3396095b5e" (UID: "40ac07c1-e189-4608-9a34-ec3396095b5e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.382421 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "40ac07c1-e189-4608-9a34-ec3396095b5e" (UID: "40ac07c1-e189-4608-9a34-ec3396095b5e"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.424460 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40ac07c1-e189-4608-9a34-ec3396095b5e" (UID: "40ac07c1-e189-4608-9a34-ec3396095b5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.430242 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.430280 4765 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.430296 4765 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.437599 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-config-data" (OuterVolumeSpecName: "config-data") pod "40ac07c1-e189-4608-9a34-ec3396095b5e" (UID: "40ac07c1-e189-4608-9a34-ec3396095b5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.528602 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f6k58"] Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.531643 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40ac07c1-e189-4608-9a34-ec3396095b5e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.937009 4765 generic.go:334] "Generic (PLEG): container finished" podID="4762a010-6fab-4ee6-bbb8-f5d6669b079a" containerID="f648fd6aa372898604c665640f08dd830134e641fc6339cc6723c4d2b33a9a20" exitCode=0 Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.937133 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.937186 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6k58" event={"ID":"4762a010-6fab-4ee6-bbb8-f5d6669b079a","Type":"ContainerDied","Data":"f648fd6aa372898604c665640f08dd830134e641fc6339cc6723c4d2b33a9a20"} Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.937257 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6k58" event={"ID":"4762a010-6fab-4ee6-bbb8-f5d6669b079a","Type":"ContainerStarted","Data":"0064007af1e654f5f44491c2f96455c7a442eb78cc475d4dd66c6a9e5fa589b6"} Jan 21 13:26:20 crc kubenswrapper[4765]: I0121 13:26:20.996359 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.014310 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.034604 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 13:26:21 crc kubenswrapper[4765]: E0121 13:26:21.035129 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerName="ceilometer-central-agent" Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.035151 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerName="ceilometer-central-agent" Jan 21 13:26:21 crc kubenswrapper[4765]: E0121 13:26:21.035180 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerName="proxy-httpd" Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.035188 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerName="proxy-httpd" Jan 21 13:26:21 crc kubenswrapper[4765]: E0121 13:26:21.035197 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerName="sg-core" Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.035221 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerName="sg-core" Jan 21 13:26:21 crc kubenswrapper[4765]: E0121 13:26:21.035249 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerName="ceilometer-notification-agent" Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.035257 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerName="ceilometer-notification-agent" Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.035550 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerName="ceilometer-central-agent" Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.035572 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerName="sg-core" Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.035587 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="40ac07c1-e189-4608-9a34-ec3396095b5e" containerName="proxy-httpd" Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.035605 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="40ac07c1-e189-4608-9a34-ec3396095b5e" 
containerName="ceilometer-notification-agent"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.043228 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.047694 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.047949 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.048137 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.050684 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.145758 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e149475f-fb59-4dd4-92f6-d83b29234528-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.145998 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfnlh\" (UniqueName: \"kubernetes.io/projected/e149475f-fb59-4dd4-92f6-d83b29234528-kube-api-access-rfnlh\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.146059 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e149475f-fb59-4dd4-92f6-d83b29234528-log-httpd\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.146270 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e149475f-fb59-4dd4-92f6-d83b29234528-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.146297 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e149475f-fb59-4dd4-92f6-d83b29234528-scripts\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.146344 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e149475f-fb59-4dd4-92f6-d83b29234528-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.146371 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e149475f-fb59-4dd4-92f6-d83b29234528-run-httpd\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.146394 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e149475f-fb59-4dd4-92f6-d83b29234528-config-data\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.248328 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e149475f-fb59-4dd4-92f6-d83b29234528-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.248369 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e149475f-fb59-4dd4-92f6-d83b29234528-scripts\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.248389 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e149475f-fb59-4dd4-92f6-d83b29234528-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.248424 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e149475f-fb59-4dd4-92f6-d83b29234528-run-httpd\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.248446 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e149475f-fb59-4dd4-92f6-d83b29234528-config-data\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.248508 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e149475f-fb59-4dd4-92f6-d83b29234528-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.248532 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfnlh\" (UniqueName: \"kubernetes.io/projected/e149475f-fb59-4dd4-92f6-d83b29234528-kube-api-access-rfnlh\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.248586 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e149475f-fb59-4dd4-92f6-d83b29234528-log-httpd\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.249028 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e149475f-fb59-4dd4-92f6-d83b29234528-log-httpd\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.249143 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e149475f-fb59-4dd4-92f6-d83b29234528-run-httpd\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.260416 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e149475f-fb59-4dd4-92f6-d83b29234528-config-data\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.262932 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e149475f-fb59-4dd4-92f6-d83b29234528-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.263714 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e149475f-fb59-4dd4-92f6-d83b29234528-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.264510 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e149475f-fb59-4dd4-92f6-d83b29234528-scripts\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.265167 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e149475f-fb59-4dd4-92f6-d83b29234528-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.282155 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfnlh\" (UniqueName: \"kubernetes.io/projected/e149475f-fb59-4dd4-92f6-d83b29234528-kube-api-access-rfnlh\") pod \"ceilometer-0\" (UID: \"e149475f-fb59-4dd4-92f6-d83b29234528\") " pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.377748 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.632165 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40ac07c1-e189-4608-9a34-ec3396095b5e" path="/var/lib/kubelet/pods/40ac07c1-e189-4608-9a34-ec3396095b5e/volumes"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.841423 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.958779 4765 generic.go:334] "Generic (PLEG): container finished" podID="1bb2f2c9-9256-4574-9510-df23c9d5ac0f" containerID="70fb3ea8bf6cb7216863a584d4e7c2ebe24cce30c749b0863db01672e5928dd4" exitCode=0
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.959234 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bb2f2c9-9256-4574-9510-df23c9d5ac0f","Type":"ContainerDied","Data":"70fb3ea8bf6cb7216863a584d4e7c2ebe24cce30c749b0863db01672e5928dd4"}
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.959269 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1bb2f2c9-9256-4574-9510-df23c9d5ac0f","Type":"ContainerDied","Data":"cede47ec0e8259f59d6f57dc1b503ec7f1a49b2b8688096802ed0b87543c486a"}
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.959288 4765 scope.go:117] "RemoveContainer" containerID="70fb3ea8bf6cb7216863a584d4e7c2ebe24cce30c749b0863db01672e5928dd4"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.959464 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.968035 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-logs\") pod \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\" (UID: \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\") "
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.968164 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc49v\" (UniqueName: \"kubernetes.io/projected/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-kube-api-access-xc49v\") pod \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\" (UID: \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\") "
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.968349 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-combined-ca-bundle\") pod \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\" (UID: \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\") "
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.968399 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-config-data\") pod \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\" (UID: \"1bb2f2c9-9256-4574-9510-df23c9d5ac0f\") "
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.980671 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-logs" (OuterVolumeSpecName: "logs") pod "1bb2f2c9-9256-4574-9510-df23c9d5ac0f" (UID: "1bb2f2c9-9256-4574-9510-df23c9d5ac0f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 13:26:21 crc kubenswrapper[4765]: I0121 13:26:21.999546 4765 scope.go:117] "RemoveContainer" containerID="93e6bc2ef89251545c3f719c2983fb38c0f8ab9eb1381c840b4c99ce6106473c"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.004514 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-kube-api-access-xc49v" (OuterVolumeSpecName: "kube-api-access-xc49v") pod "1bb2f2c9-9256-4574-9510-df23c9d5ac0f" (UID: "1bb2f2c9-9256-4574-9510-df23c9d5ac0f"). InnerVolumeSpecName "kube-api-access-xc49v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.030462 4765 scope.go:117] "RemoveContainer" containerID="70fb3ea8bf6cb7216863a584d4e7c2ebe24cce30c749b0863db01672e5928dd4"
Jan 21 13:26:22 crc kubenswrapper[4765]: E0121 13:26:22.031042 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70fb3ea8bf6cb7216863a584d4e7c2ebe24cce30c749b0863db01672e5928dd4\": container with ID starting with 70fb3ea8bf6cb7216863a584d4e7c2ebe24cce30c749b0863db01672e5928dd4 not found: ID does not exist" containerID="70fb3ea8bf6cb7216863a584d4e7c2ebe24cce30c749b0863db01672e5928dd4"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.031106 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70fb3ea8bf6cb7216863a584d4e7c2ebe24cce30c749b0863db01672e5928dd4"} err="failed to get container status \"70fb3ea8bf6cb7216863a584d4e7c2ebe24cce30c749b0863db01672e5928dd4\": rpc error: code = NotFound desc = could not find container \"70fb3ea8bf6cb7216863a584d4e7c2ebe24cce30c749b0863db01672e5928dd4\": container with ID starting with 70fb3ea8bf6cb7216863a584d4e7c2ebe24cce30c749b0863db01672e5928dd4 not found: ID does not exist"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.031135 4765 scope.go:117] "RemoveContainer" containerID="93e6bc2ef89251545c3f719c2983fb38c0f8ab9eb1381c840b4c99ce6106473c"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.032968 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-config-data" (OuterVolumeSpecName: "config-data") pod "1bb2f2c9-9256-4574-9510-df23c9d5ac0f" (UID: "1bb2f2c9-9256-4574-9510-df23c9d5ac0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:26:22 crc kubenswrapper[4765]: E0121 13:26:22.033838 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93e6bc2ef89251545c3f719c2983fb38c0f8ab9eb1381c840b4c99ce6106473c\": container with ID starting with 93e6bc2ef89251545c3f719c2983fb38c0f8ab9eb1381c840b4c99ce6106473c not found: ID does not exist" containerID="93e6bc2ef89251545c3f719c2983fb38c0f8ab9eb1381c840b4c99ce6106473c"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.035019 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93e6bc2ef89251545c3f719c2983fb38c0f8ab9eb1381c840b4c99ce6106473c"} err="failed to get container status \"93e6bc2ef89251545c3f719c2983fb38c0f8ab9eb1381c840b4c99ce6106473c\": rpc error: code = NotFound desc = could not find container \"93e6bc2ef89251545c3f719c2983fb38c0f8ab9eb1381c840b4c99ce6106473c\": container with ID starting with 93e6bc2ef89251545c3f719c2983fb38c0f8ab9eb1381c840b4c99ce6106473c not found: ID does not exist"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.043770 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bb2f2c9-9256-4574-9510-df23c9d5ac0f" (UID: "1bb2f2c9-9256-4574-9510-df23c9d5ac0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.070686 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.070715 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.070728 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-logs\") on node \"crc\" DevicePath \"\""
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.070736 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc49v\" (UniqueName: \"kubernetes.io/projected/1bb2f2c9-9256-4574-9510-df23c9d5ac0f-kube-api-access-xc49v\") on node \"crc\" DevicePath \"\""
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.097547 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 13:26:22 crc kubenswrapper[4765]: W0121 13:26:22.124759 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode149475f_fb59_4dd4_92f6_d83b29234528.slice/crio-28dc82f6ab435726c56bd4109c86c0889834860dbc44bc9a21c1d649cca6c507 WatchSource:0}: Error finding container 28dc82f6ab435726c56bd4109c86c0889834860dbc44bc9a21c1d649cca6c507: Status 404 returned error can't find the container with id 28dc82f6ab435726c56bd4109c86c0889834860dbc44bc9a21c1d649cca6c507
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.344459 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.358410 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.371887 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 21 13:26:22 crc kubenswrapper[4765]: E0121 13:26:22.372522 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bb2f2c9-9256-4574-9510-df23c9d5ac0f" containerName="nova-api-api"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.372615 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb2f2c9-9256-4574-9510-df23c9d5ac0f" containerName="nova-api-api"
Jan 21 13:26:22 crc kubenswrapper[4765]: E0121 13:26:22.372688 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bb2f2c9-9256-4574-9510-df23c9d5ac0f" containerName="nova-api-log"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.372749 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb2f2c9-9256-4574-9510-df23c9d5ac0f" containerName="nova-api-log"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.373009 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bb2f2c9-9256-4574-9510-df23c9d5ac0f" containerName="nova-api-log"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.373081 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bb2f2c9-9256-4574-9510-df23c9d5ac0f" containerName="nova-api-api"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.374127 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.379376 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.379784 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.379947 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.393046 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.478227 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-internal-tls-certs\") pod \"nova-api-0\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.478322 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58nfg\" (UniqueName: \"kubernetes.io/projected/037fd778-55f2-416b-aa29-b74bd3176070-kube-api-access-58nfg\") pod \"nova-api-0\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.478420 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-config-data\") pod \"nova-api-0\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.478474 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/037fd778-55f2-416b-aa29-b74bd3176070-logs\") pod \"nova-api-0\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.478567 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-public-tls-certs\") pod \"nova-api-0\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.478699 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.579957 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.580604 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-internal-tls-certs\") pod \"nova-api-0\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.580747 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58nfg\" (UniqueName: \"kubernetes.io/projected/037fd778-55f2-416b-aa29-b74bd3176070-kube-api-access-58nfg\") pod \"nova-api-0\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.580845 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-config-data\") pod \"nova-api-0\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.580955 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/037fd778-55f2-416b-aa29-b74bd3176070-logs\") pod \"nova-api-0\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.581086 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-public-tls-certs\") pod \"nova-api-0\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.582569 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/037fd778-55f2-416b-aa29-b74bd3176070-logs\") pod \"nova-api-0\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.586763 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-public-tls-certs\") pod \"nova-api-0\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.586793 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-config-data\") pod \"nova-api-0\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.587196 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-internal-tls-certs\") pod \"nova-api-0\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.590424 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.623720 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58nfg\" (UniqueName: \"kubernetes.io/projected/037fd778-55f2-416b-aa29-b74bd3176070-kube-api-access-58nfg\") pod \"nova-api-0\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.695468 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 13:26:22 crc kubenswrapper[4765]: I0121 13:26:22.983008 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6k58" event={"ID":"4762a010-6fab-4ee6-bbb8-f5d6669b079a","Type":"ContainerStarted","Data":"b3ec7eaba2862341ea5304aeef0631108a71b35b1f4ceb54dc3d993b87264ae4"}
Jan 21 13:26:23 crc kubenswrapper[4765]: I0121 13:26:23.002764 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e149475f-fb59-4dd4-92f6-d83b29234528","Type":"ContainerStarted","Data":"28dc82f6ab435726c56bd4109c86c0889834860dbc44bc9a21c1d649cca6c507"}
Jan 21 13:26:23 crc kubenswrapper[4765]: I0121 13:26:23.137182 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6558674dbd-lct5s"
Jan 21 13:26:23 crc kubenswrapper[4765]: I0121 13:26:23.304568 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 21 13:26:23 crc kubenswrapper[4765]: I0121 13:26:23.525878 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 21 13:26:23 crc kubenswrapper[4765]: I0121 13:26:23.625150 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bb2f2c9-9256-4574-9510-df23c9d5ac0f" path="/var/lib/kubelet/pods/1bb2f2c9-9256-4574-9510-df23c9d5ac0f/volumes"
Jan 21 13:26:23 crc kubenswrapper[4765]: I0121 13:26:23.914667 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.042457 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"037fd778-55f2-416b-aa29-b74bd3176070","Type":"ContainerStarted","Data":"0d2bd56c1b4b3ce981594cdbc5b6f8b85d4e492b0fe2d166d15409c7f93ce572"}
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.042564 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"037fd778-55f2-416b-aa29-b74bd3176070","Type":"ContainerStarted","Data":"70713bdd2d5978f955ccbf1dc90a55716c1caa44c018e42afd65a61827881ea9"}
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.070975 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.362327 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-dkqn8"]
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.363960 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dkqn8"
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.367820 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.368478 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.454898 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dkqn8"]
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.456409 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9"
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.570476 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-xq8nf"]
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.570784 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" podUID="995d0c57-db1c-4e45-a405-cc87dc9094da" containerName="dnsmasq-dns" containerID="cri-o://e26f5bd0d8964cadbb8bec920a918029ef026d03a754d329dc29b502f5d6b326" gracePeriod=10
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.581567 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwhwg\" (UniqueName: \"kubernetes.io/projected/9ea33372-7a63-416b-a934-2f938cf0a212-kube-api-access-lwhwg\") pod \"nova-cell1-cell-mapping-dkqn8\" (UID: \"9ea33372-7a63-416b-a934-2f938cf0a212\") " pod="openstack/nova-cell1-cell-mapping-dkqn8"
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.581613 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ea33372-7a63-416b-a934-2f938cf0a212-scripts\") pod \"nova-cell1-cell-mapping-dkqn8\" (UID: \"9ea33372-7a63-416b-a934-2f938cf0a212\") " pod="openstack/nova-cell1-cell-mapping-dkqn8"
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.582095 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ea33372-7a63-416b-a934-2f938cf0a212-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dkqn8\" (UID: \"9ea33372-7a63-416b-a934-2f938cf0a212\") " pod="openstack/nova-cell1-cell-mapping-dkqn8"
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.582165 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ea33372-7a63-416b-a934-2f938cf0a212-config-data\") pod \"nova-cell1-cell-mapping-dkqn8\" (UID: \"9ea33372-7a63-416b-a934-2f938cf0a212\") " pod="openstack/nova-cell1-cell-mapping-dkqn8"
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.683182 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ea33372-7a63-416b-a934-2f938cf0a212-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dkqn8\" (UID: \"9ea33372-7a63-416b-a934-2f938cf0a212\") " pod="openstack/nova-cell1-cell-mapping-dkqn8"
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.683554 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ea33372-7a63-416b-a934-2f938cf0a212-config-data\") pod \"nova-cell1-cell-mapping-dkqn8\" (UID: \"9ea33372-7a63-416b-a934-2f938cf0a212\") " pod="openstack/nova-cell1-cell-mapping-dkqn8"
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.683691 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwhwg\" (UniqueName: \"kubernetes.io/projected/9ea33372-7a63-416b-a934-2f938cf0a212-kube-api-access-lwhwg\") pod \"nova-cell1-cell-mapping-dkqn8\" (UID: \"9ea33372-7a63-416b-a934-2f938cf0a212\") " pod="openstack/nova-cell1-cell-mapping-dkqn8"
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.683774 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ea33372-7a63-416b-a934-2f938cf0a212-scripts\") pod \"nova-cell1-cell-mapping-dkqn8\" (UID: \"9ea33372-7a63-416b-a934-2f938cf0a212\") " pod="openstack/nova-cell1-cell-mapping-dkqn8"
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.741408 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ea33372-7a63-416b-a934-2f938cf0a212-config-data\") pod \"nova-cell1-cell-mapping-dkqn8\" (UID: \"9ea33372-7a63-416b-a934-2f938cf0a212\") " pod="openstack/nova-cell1-cell-mapping-dkqn8"
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.742125 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ea33372-7a63-416b-a934-2f938cf0a212-scripts\") pod \"nova-cell1-cell-mapping-dkqn8\" (UID: \"9ea33372-7a63-416b-a934-2f938cf0a212\") " pod="openstack/nova-cell1-cell-mapping-dkqn8"
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.746438 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ea33372-7a63-416b-a934-2f938cf0a212-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dkqn8\" (UID: \"9ea33372-7a63-416b-a934-2f938cf0a212\") " pod="openstack/nova-cell1-cell-mapping-dkqn8"
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.749133 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwhwg\" (UniqueName: \"kubernetes.io/projected/9ea33372-7a63-416b-a934-2f938cf0a212-kube-api-access-lwhwg\") pod \"nova-cell1-cell-mapping-dkqn8\" (UID: \"9ea33372-7a63-416b-a934-2f938cf0a212\") " pod="openstack/nova-cell1-cell-mapping-dkqn8"
Jan 21 13:26:24 crc kubenswrapper[4765]: I0121 13:26:24.980542 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dkqn8"
Jan 21 13:26:25 crc kubenswrapper[4765]: I0121 13:26:25.051641 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"037fd778-55f2-416b-aa29-b74bd3176070","Type":"ContainerStarted","Data":"c75e5c74123ab5f20f4043999b3ff0023d004fd87cc64c920bf259c56c770908"}
Jan 21 13:26:25 crc kubenswrapper[4765]: I0121 13:26:25.054656 4765 generic.go:334] "Generic (PLEG): container finished" podID="995d0c57-db1c-4e45-a405-cc87dc9094da" containerID="e26f5bd0d8964cadbb8bec920a918029ef026d03a754d329dc29b502f5d6b326" exitCode=0
Jan 21 13:26:25 crc kubenswrapper[4765]: I0121 13:26:25.054720 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" event={"ID":"995d0c57-db1c-4e45-a405-cc87dc9094da","Type":"ContainerDied","Data":"e26f5bd0d8964cadbb8bec920a918029ef026d03a754d329dc29b502f5d6b326"}
Jan 21 13:26:25 crc kubenswrapper[4765]: I0121 13:26:25.081983 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.081958255 podStartE2EDuration="3.081958255s" podCreationTimestamp="2026-01-21 13:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:26:25.073225939 +0000 UTC m=+1446.090951751" watchObservedRunningTime="2026-01-21 13:26:25.081958255 +0000 UTC m=+1446.099684077"
Jan 21 13:26:25 crc kubenswrapper[4765]: I0121 13:26:25.922544 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-xq8nf"
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.005723 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dkqn8"]
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.025843 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-dns-svc\") pod \"995d0c57-db1c-4e45-a405-cc87dc9094da\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") "
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.025959 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lxqh\" (UniqueName: \"kubernetes.io/projected/995d0c57-db1c-4e45-a405-cc87dc9094da-kube-api-access-6lxqh\") pod \"995d0c57-db1c-4e45-a405-cc87dc9094da\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") "
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.026004 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-ovsdbserver-nb\") pod \"995d0c57-db1c-4e45-a405-cc87dc9094da\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") "
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.026051 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-config\") pod \"995d0c57-db1c-4e45-a405-cc87dc9094da\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") "
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.026177 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-ovsdbserver-sb\") pod \"995d0c57-db1c-4e45-a405-cc87dc9094da\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") "
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.027455 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-dns-swift-storage-0\") pod \"995d0c57-db1c-4e45-a405-cc87dc9094da\" (UID: \"995d0c57-db1c-4e45-a405-cc87dc9094da\") "
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.071955 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-xq8nf" event={"ID":"995d0c57-db1c-4e45-a405-cc87dc9094da","Type":"ContainerDied","Data":"d5e003f9c21d4db8bb982e6fe32752f53ca3b782854083675c9a878af502b529"}
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.072007 4765 scope.go:117] "RemoveContainer" containerID="e26f5bd0d8964cadbb8bec920a918029ef026d03a754d329dc29b502f5d6b326"
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.072146 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-xq8nf"
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.076749 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e149475f-fb59-4dd4-92f6-d83b29234528","Type":"ContainerStarted","Data":"c1b1e7d00c0038421d03e6b2fd690225b16db2486d61e3396489edefca4e874d"}
Jan 21 13:26:26 crc kubenswrapper[4765]: W0121 13:26:26.165775 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ea33372_7a63_416b_a934_2f938cf0a212.slice/crio-c71e13ba2f37becf25abd79f7e57d9ab755809c28cbcd16291571e555c674673 WatchSource:0}: Error finding container c71e13ba2f37becf25abd79f7e57d9ab755809c28cbcd16291571e555c674673: Status 404 returned error can't find the container with id c71e13ba2f37becf25abd79f7e57d9ab755809c28cbcd16291571e555c674673
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.181474 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/995d0c57-db1c-4e45-a405-cc87dc9094da-kube-api-access-6lxqh" (OuterVolumeSpecName: "kube-api-access-6lxqh") pod "995d0c57-db1c-4e45-a405-cc87dc9094da" (UID: "995d0c57-db1c-4e45-a405-cc87dc9094da"). InnerVolumeSpecName "kube-api-access-6lxqh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.196150 4765 scope.go:117] "RemoveContainer" containerID="3b7f5dbfda35ace929f33bdd1d747c5fa2b7ad7d040800f3e83eb6a42844237e"
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.238802 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lxqh\" (UniqueName: \"kubernetes.io/projected/995d0c57-db1c-4e45-a405-cc87dc9094da-kube-api-access-6lxqh\") on node \"crc\" DevicePath \"\""
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.244408 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "995d0c57-db1c-4e45-a405-cc87dc9094da" (UID: "995d0c57-db1c-4e45-a405-cc87dc9094da"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.265525 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "995d0c57-db1c-4e45-a405-cc87dc9094da" (UID: "995d0c57-db1c-4e45-a405-cc87dc9094da"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.267508 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "995d0c57-db1c-4e45-a405-cc87dc9094da" (UID: "995d0c57-db1c-4e45-a405-cc87dc9094da"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.269361 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-config" (OuterVolumeSpecName: "config") pod "995d0c57-db1c-4e45-a405-cc87dc9094da" (UID: "995d0c57-db1c-4e45-a405-cc87dc9094da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.270911 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "995d0c57-db1c-4e45-a405-cc87dc9094da" (UID: "995d0c57-db1c-4e45-a405-cc87dc9094da"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.341478 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.341794 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.341938 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-config\") on node \"crc\" DevicePath \"\""
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.342096 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.342238 4765 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/995d0c57-db1c-4e45-a405-cc87dc9094da-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.971735 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-xq8nf"]
Jan 21 13:26:26 crc kubenswrapper[4765]: I0121 13:26:26.981662 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-xq8nf"]
Jan 21 13:26:27 crc kubenswrapper[4765]: I0121 13:26:27.092129 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dkqn8" event={"ID":"9ea33372-7a63-416b-a934-2f938cf0a212","Type":"ContainerStarted","Data":"c71e13ba2f37becf25abd79f7e57d9ab755809c28cbcd16291571e555c674673"}
Jan 21 13:26:27 crc kubenswrapper[4765]: I0121 13:26:27.097701 4765 generic.go:334] "Generic (PLEG): container finished" podID="4762a010-6fab-4ee6-bbb8-f5d6669b079a" containerID="b3ec7eaba2862341ea5304aeef0631108a71b35b1f4ceb54dc3d993b87264ae4" exitCode=0
Jan 21 13:26:27 crc kubenswrapper[4765]: I0121 13:26:27.097925 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6k58" event={"ID":"4762a010-6fab-4ee6-bbb8-f5d6669b079a","Type":"ContainerDied","Data":"b3ec7eaba2862341ea5304aeef0631108a71b35b1f4ceb54dc3d993b87264ae4"}
Jan 21 13:26:27 crc kubenswrapper[4765]: I0121 13:26:27.218981 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6558674dbd-lct5s" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:59134->10.217.0.149:8443: read: connection reset by peer"
Jan 21 13:26:27 crc kubenswrapper[4765]: E0121 13:26:27.228512 4765 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod995d0c57_db1c_4e45_a405_cc87dc9094da.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod995d0c57_db1c_4e45_a405_cc87dc9094da.slice/crio-d5e003f9c21d4db8bb982e6fe32752f53ca3b782854083675c9a878af502b529\": RecentStats: unable to find data in memory cache]"
Jan 21 13:26:27 crc kubenswrapper[4765]: I0121 13:26:27.645911 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="995d0c57-db1c-4e45-a405-cc87dc9094da" path="/var/lib/kubelet/pods/995d0c57-db1c-4e45-a405-cc87dc9094da/volumes"
Jan 21 13:26:28 crc kubenswrapper[4765]: I0121 13:26:28.113312 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6k58" event={"ID":"4762a010-6fab-4ee6-bbb8-f5d6669b079a","Type":"ContainerStarted","Data":"33b8ad6461fb787879d86af6a94c7ef15715a17004c598cb618efb686eee07c9"}
Jan 21 13:26:28 crc kubenswrapper[4765]: I0121 13:26:28.116706 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e149475f-fb59-4dd4-92f6-d83b29234528","Type":"ContainerStarted","Data":"f854c5945f671456758751c3a0011e13264d725886bdb93ca22c3bd681da1dd0"}
Jan 21 13:26:28 crc kubenswrapper[4765]: I0121 13:26:28.117082 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e149475f-fb59-4dd4-92f6-d83b29234528","Type":"ContainerStarted","Data":"2cf9ecf8f80500bb36e779e81812007dad4308a4790893e3fa1ebebefbe9bd70"}
Jan 21 13:26:28 crc kubenswrapper[4765]: I0121 13:26:28.118953 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dkqn8" event={"ID":"9ea33372-7a63-416b-a934-2f938cf0a212","Type":"ContainerStarted","Data":"6f475475f6593d1407914c9b3427b6d9985ed2c31a852cf7f2809406d2ef4fbb"}
Jan 21 13:26:28 crc kubenswrapper[4765]: I0121 13:26:28.122429 4765 generic.go:334] "Generic (PLEG): container finished" podID="074ae613-bc7f-4443-abdb-7010b6054997" containerID="689cae05dcc0e9b9b0adda9d542e5c2b2db33884367706410ddf5bee650aba60" exitCode=0
Jan 21 13:26:28 crc kubenswrapper[4765]: I0121 13:26:28.122483 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6558674dbd-lct5s" event={"ID":"074ae613-bc7f-4443-abdb-7010b6054997","Type":"ContainerDied","Data":"689cae05dcc0e9b9b0adda9d542e5c2b2db33884367706410ddf5bee650aba60"}
Jan 21 13:26:28 crc kubenswrapper[4765]: I0121 13:26:28.122521 4765 scope.go:117] "RemoveContainer" containerID="e031dd893b547965535c1708b7e364ac4020188df01de94c0db0612a266dcb98"
Jan 21 13:26:28 crc kubenswrapper[4765]: I0121 13:26:28.138052 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f6k58" podStartSLOduration=2.5214415580000002 podStartE2EDuration="9.138035091s" podCreationTimestamp="2026-01-21 13:26:19 +0000 UTC" firstStartedPulling="2026-01-21 13:26:20.945148543 +0000 UTC m=+1441.962874365" lastFinishedPulling="2026-01-21 13:26:27.561742066 +0000 UTC m=+1448.579467898" observedRunningTime="2026-01-21 13:26:28.134779037 +0000 UTC m=+1449.152504859" watchObservedRunningTime="2026-01-21 13:26:28.138035091 +0000 UTC m=+1449.155760913"
Jan 21 13:26:28 crc kubenswrapper[4765]: I0121 13:26:28.157342 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-dkqn8" podStartSLOduration=4.157321078 podStartE2EDuration="4.157321078s" podCreationTimestamp="2026-01-21 13:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:26:28.153178171 +0000 UTC m=+1449.170903993" watchObservedRunningTime="2026-01-21 13:26:28.157321078 +0000 UTC m=+1449.175046920"
Jan 21 13:26:29 crc kubenswrapper[4765]: I0121 13:26:29.922836 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f6k58"
Jan 21 13:26:29 crc kubenswrapper[4765]: I0121 13:26:29.923466 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f6k58"
Jan 21 13:26:30 crc kubenswrapper[4765]: I0121 13:26:30.145472 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e149475f-fb59-4dd4-92f6-d83b29234528","Type":"ContainerStarted","Data":"34a7bbbac4ccb6435d0d2f921af16407835dc594ab417c07f31eb81cecced4a0"}
Jan 21 13:26:30 crc kubenswrapper[4765]: I0121 13:26:30.145945 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 21 13:26:30 crc kubenswrapper[4765]: I0121 13:26:30.169427 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.09327207 podStartE2EDuration="10.169409376s" podCreationTimestamp="2026-01-21 13:26:20 +0000 UTC" firstStartedPulling="2026-01-21 13:26:22.128820083 +0000 UTC m=+1443.146545905" lastFinishedPulling="2026-01-21 13:26:29.204957389 +0000 UTC m=+1450.222683211" observedRunningTime="2026-01-21 13:26:30.165678019 +0000 UTC m=+1451.183403861" watchObservedRunningTime="2026-01-21 13:26:30.169409376 +0000 UTC m=+1451.187135198"
Jan 21 13:26:30 crc kubenswrapper[4765]: I0121 13:26:30.969864 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f6k58" podUID="4762a010-6fab-4ee6-bbb8-f5d6669b079a" containerName="registry-server" probeResult="failure" output=<
Jan 21 13:26:30 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s
Jan 21 13:26:30 crc kubenswrapper[4765]: >
Jan 21 13:26:32 crc kubenswrapper[4765]: I0121 13:26:32.696693 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 21 13:26:32 crc kubenswrapper[4765]: I0121 13:26:32.696750 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 21 13:26:33 crc kubenswrapper[4765]: I0121 13:26:33.280753 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6558674dbd-lct5s" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Jan 21 13:26:33 crc kubenswrapper[4765]: I0121 13:26:33.711498 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="037fd778-55f2-416b-aa29-b74bd3176070" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 21 13:26:33 crc kubenswrapper[4765]: I0121 13:26:33.711568 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="037fd778-55f2-416b-aa29-b74bd3176070" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 21 13:26:34 crc kubenswrapper[4765]: I0121 13:26:34.208238 4765 generic.go:334] "Generic (PLEG): container finished" podID="9ea33372-7a63-416b-a934-2f938cf0a212" containerID="6f475475f6593d1407914c9b3427b6d9985ed2c31a852cf7f2809406d2ef4fbb" exitCode=0
Jan 21 13:26:34 crc kubenswrapper[4765]: I0121 13:26:34.208285 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dkqn8" event={"ID":"9ea33372-7a63-416b-a934-2f938cf0a212","Type":"ContainerDied","Data":"6f475475f6593d1407914c9b3427b6d9985ed2c31a852cf7f2809406d2ef4fbb"}
Jan 21 13:26:34 crc kubenswrapper[4765]: I0121 13:26:34.807099 4765 scope.go:117] "RemoveContainer" containerID="f931968548c653e721e07126459e74d892d7d62bd1316ea0a0f30d8d2b9d77fc"
Jan 21 13:26:35 crc kubenswrapper[4765]: I0121 13:26:35.064153 4765 scope.go:117] "RemoveContainer" containerID="4197c425cde6bc4bd50dfc43741ed3f45500c0e74b4a3e447c454fe3a3f1db29"
Jan 21 13:26:35 crc kubenswrapper[4765]: I0121 13:26:35.660172 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dkqn8"
Jan 21 13:26:35 crc kubenswrapper[4765]: I0121 13:26:35.850781 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ea33372-7a63-416b-a934-2f938cf0a212-config-data\") pod \"9ea33372-7a63-416b-a934-2f938cf0a212\" (UID: \"9ea33372-7a63-416b-a934-2f938cf0a212\") "
Jan 21 13:26:35 crc kubenswrapper[4765]: I0121 13:26:35.851143 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ea33372-7a63-416b-a934-2f938cf0a212-combined-ca-bundle\") pod \"9ea33372-7a63-416b-a934-2f938cf0a212\" (UID: \"9ea33372-7a63-416b-a934-2f938cf0a212\") "
Jan 21 13:26:35 crc kubenswrapper[4765]: I0121 13:26:35.851240 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ea33372-7a63-416b-a934-2f938cf0a212-scripts\") pod \"9ea33372-7a63-416b-a934-2f938cf0a212\" (UID: \"9ea33372-7a63-416b-a934-2f938cf0a212\") "
Jan 21 13:26:35 crc kubenswrapper[4765]: I0121 13:26:35.851268 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwhwg\" (UniqueName: \"kubernetes.io/projected/9ea33372-7a63-416b-a934-2f938cf0a212-kube-api-access-lwhwg\") pod \"9ea33372-7a63-416b-a934-2f938cf0a212\" (UID: \"9ea33372-7a63-416b-a934-2f938cf0a212\") "
Jan 21 13:26:35 crc kubenswrapper[4765]: I0121 13:26:35.872410 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ea33372-7a63-416b-a934-2f938cf0a212-kube-api-access-lwhwg" (OuterVolumeSpecName: "kube-api-access-lwhwg") pod "9ea33372-7a63-416b-a934-2f938cf0a212" (UID: "9ea33372-7a63-416b-a934-2f938cf0a212"). InnerVolumeSpecName "kube-api-access-lwhwg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:26:35 crc kubenswrapper[4765]: I0121 13:26:35.876988 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ea33372-7a63-416b-a934-2f938cf0a212-scripts" (OuterVolumeSpecName: "scripts") pod "9ea33372-7a63-416b-a934-2f938cf0a212" (UID: "9ea33372-7a63-416b-a934-2f938cf0a212"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:26:35 crc kubenswrapper[4765]: I0121 13:26:35.938269 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ea33372-7a63-416b-a934-2f938cf0a212-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ea33372-7a63-416b-a934-2f938cf0a212" (UID: "9ea33372-7a63-416b-a934-2f938cf0a212"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:26:35 crc kubenswrapper[4765]: I0121 13:26:35.953982 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ea33372-7a63-416b-a934-2f938cf0a212-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 13:26:35 crc kubenswrapper[4765]: I0121 13:26:35.954019 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ea33372-7a63-416b-a934-2f938cf0a212-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 13:26:35 crc kubenswrapper[4765]: I0121 13:26:35.954029 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwhwg\" (UniqueName: \"kubernetes.io/projected/9ea33372-7a63-416b-a934-2f938cf0a212-kube-api-access-lwhwg\") on node \"crc\" DevicePath \"\""
Jan 21 13:26:35 crc kubenswrapper[4765]: I0121 13:26:35.965942 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ea33372-7a63-416b-a934-2f938cf0a212-config-data" (OuterVolumeSpecName: "config-data") pod "9ea33372-7a63-416b-a934-2f938cf0a212" (UID: "9ea33372-7a63-416b-a934-2f938cf0a212"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:26:36 crc kubenswrapper[4765]: I0121 13:26:36.055998 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ea33372-7a63-416b-a934-2f938cf0a212-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 13:26:36 crc kubenswrapper[4765]: I0121 13:26:36.235906 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dkqn8" event={"ID":"9ea33372-7a63-416b-a934-2f938cf0a212","Type":"ContainerDied","Data":"c71e13ba2f37becf25abd79f7e57d9ab755809c28cbcd16291571e555c674673"}
Jan 21 13:26:36 crc kubenswrapper[4765]: I0121 13:26:36.235953 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c71e13ba2f37becf25abd79f7e57d9ab755809c28cbcd16291571e555c674673"
Jan 21 13:26:36 crc kubenswrapper[4765]: I0121 13:26:36.236064 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dkqn8"
Jan 21 13:26:36 crc kubenswrapper[4765]: I0121 13:26:36.441047 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 21 13:26:36 crc kubenswrapper[4765]: I0121 13:26:36.441370 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="037fd778-55f2-416b-aa29-b74bd3176070" containerName="nova-api-log" containerID="cri-o://0d2bd56c1b4b3ce981594cdbc5b6f8b85d4e492b0fe2d166d15409c7f93ce572" gracePeriod=30
Jan 21 13:26:36 crc kubenswrapper[4765]: I0121 13:26:36.441453 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="037fd778-55f2-416b-aa29-b74bd3176070" containerName="nova-api-api" containerID="cri-o://c75e5c74123ab5f20f4043999b3ff0023d004fd87cc64c920bf259c56c770908" gracePeriod=30
Jan 21 13:26:36 crc kubenswrapper[4765]: I0121 13:26:36.467905 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 13:26:36 crc kubenswrapper[4765]: I0121 13:26:36.468128 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2757aad1-8460-4e56-a626-3b4e332dcc91" containerName="nova-scheduler-scheduler" containerID="cri-o://4427edbf26bb569f64774a03b34bc9aa42df5a2eae989aa22a6d444fe7451d6b" gracePeriod=30
Jan 21 13:26:36 crc kubenswrapper[4765]: I0121 13:26:36.547914 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 13:26:36 crc kubenswrapper[4765]: I0121 13:26:36.548200 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" containerName="nova-metadata-log" containerID="cri-o://b226945c0741ed2e8aa7fb1c4705ec73827e6dfd6aa7d242776b766bf1feb4f2" gracePeriod=30
Jan 21 13:26:36 crc kubenswrapper[4765]: I0121 13:26:36.548304 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" containerName="nova-metadata-metadata" containerID="cri-o://96731ad47a3de7ff47a292c13eac49da838f62f9ec25fad2ace6243acd589a62" gracePeriod=30
Jan 21 13:26:37 crc kubenswrapper[4765]: I0121 13:26:37.246045 4765 generic.go:334] "Generic (PLEG): container finished" podID="de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" containerID="b226945c0741ed2e8aa7fb1c4705ec73827e6dfd6aa7d242776b766bf1feb4f2" exitCode=143
Jan 21 13:26:37 crc kubenswrapper[4765]: I0121 13:26:37.246121 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2","Type":"ContainerDied","Data":"b226945c0741ed2e8aa7fb1c4705ec73827e6dfd6aa7d242776b766bf1feb4f2"}
Jan 21 13:26:37 crc kubenswrapper[4765]: I0121 13:26:37.252817 4765 generic.go:334] "Generic (PLEG): container finished" podID="037fd778-55f2-416b-aa29-b74bd3176070" containerID="0d2bd56c1b4b3ce981594cdbc5b6f8b85d4e492b0fe2d166d15409c7f93ce572" exitCode=143
Jan 21 13:26:37 crc kubenswrapper[4765]: I0121 13:26:37.252858 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"037fd778-55f2-416b-aa29-b74bd3176070","Type":"ContainerDied","Data":"0d2bd56c1b4b3ce981594cdbc5b6f8b85d4e492b0fe2d166d15409c7f93ce572"}
Jan 21 13:26:38 crc kubenswrapper[4765]: I0121 13:26:38.916999 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.018382 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx5q4\" (UniqueName: \"kubernetes.io/projected/2757aad1-8460-4e56-a626-3b4e332dcc91-kube-api-access-hx5q4\") pod \"2757aad1-8460-4e56-a626-3b4e332dcc91\" (UID: \"2757aad1-8460-4e56-a626-3b4e332dcc91\") "
Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.018481 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2757aad1-8460-4e56-a626-3b4e332dcc91-combined-ca-bundle\") pod \"2757aad1-8460-4e56-a626-3b4e332dcc91\" (UID: \"2757aad1-8460-4e56-a626-3b4e332dcc91\") "
Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.018579 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2757aad1-8460-4e56-a626-3b4e332dcc91-config-data\") pod \"2757aad1-8460-4e56-a626-3b4e332dcc91\" (UID: \"2757aad1-8460-4e56-a626-3b4e332dcc91\") "
Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.025822 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2757aad1-8460-4e56-a626-3b4e332dcc91-kube-api-access-hx5q4" (OuterVolumeSpecName: "kube-api-access-hx5q4") pod "2757aad1-8460-4e56-a626-3b4e332dcc91" (UID: "2757aad1-8460-4e56-a626-3b4e332dcc91"). InnerVolumeSpecName "kube-api-access-hx5q4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.053290 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2757aad1-8460-4e56-a626-3b4e332dcc91-config-data" (OuterVolumeSpecName: "config-data") pod "2757aad1-8460-4e56-a626-3b4e332dcc91" (UID: "2757aad1-8460-4e56-a626-3b4e332dcc91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.076668 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2757aad1-8460-4e56-a626-3b4e332dcc91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2757aad1-8460-4e56-a626-3b4e332dcc91" (UID: "2757aad1-8460-4e56-a626-3b4e332dcc91"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.120915 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2757aad1-8460-4e56-a626-3b4e332dcc91-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.121170 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx5q4\" (UniqueName: \"kubernetes.io/projected/2757aad1-8460-4e56-a626-3b4e332dcc91-kube-api-access-hx5q4\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.121181 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2757aad1-8460-4e56-a626-3b4e332dcc91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.269388 4765 generic.go:334] "Generic (PLEG): container finished" podID="2757aad1-8460-4e56-a626-3b4e332dcc91" containerID="4427edbf26bb569f64774a03b34bc9aa42df5a2eae989aa22a6d444fe7451d6b" exitCode=0 Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.269440 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.269446 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2757aad1-8460-4e56-a626-3b4e332dcc91","Type":"ContainerDied","Data":"4427edbf26bb569f64774a03b34bc9aa42df5a2eae989aa22a6d444fe7451d6b"} Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.269487 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2757aad1-8460-4e56-a626-3b4e332dcc91","Type":"ContainerDied","Data":"d78df7983816a005041dcede74dd1282b691c801d5c508bb16a78ada564b9a23"} Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.269503 4765 scope.go:117] "RemoveContainer" containerID="4427edbf26bb569f64774a03b34bc9aa42df5a2eae989aa22a6d444fe7451d6b" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.297905 4765 scope.go:117] "RemoveContainer" containerID="4427edbf26bb569f64774a03b34bc9aa42df5a2eae989aa22a6d444fe7451d6b" Jan 21 13:26:39 crc kubenswrapper[4765]: E0121 13:26:39.299057 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4427edbf26bb569f64774a03b34bc9aa42df5a2eae989aa22a6d444fe7451d6b\": container with ID starting with 4427edbf26bb569f64774a03b34bc9aa42df5a2eae989aa22a6d444fe7451d6b not found: ID does not exist" containerID="4427edbf26bb569f64774a03b34bc9aa42df5a2eae989aa22a6d444fe7451d6b" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.299362 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4427edbf26bb569f64774a03b34bc9aa42df5a2eae989aa22a6d444fe7451d6b"} err="failed to get container status \"4427edbf26bb569f64774a03b34bc9aa42df5a2eae989aa22a6d444fe7451d6b\": rpc error: code = NotFound desc = could not find container \"4427edbf26bb569f64774a03b34bc9aa42df5a2eae989aa22a6d444fe7451d6b\": container with ID starting with 4427edbf26bb569f64774a03b34bc9aa42df5a2eae989aa22a6d444fe7451d6b not found: ID does not exist" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.335903 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.351936 4765 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.385817 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:26:39 crc kubenswrapper[4765]: E0121 13:26:39.386348 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="995d0c57-db1c-4e45-a405-cc87dc9094da" containerName="dnsmasq-dns" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.386373 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="995d0c57-db1c-4e45-a405-cc87dc9094da" containerName="dnsmasq-dns" Jan 21 13:26:39 crc kubenswrapper[4765]: E0121 13:26:39.386385 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea33372-7a63-416b-a934-2f938cf0a212" containerName="nova-manage" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.386394 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ea33372-7a63-416b-a934-2f938cf0a212" containerName="nova-manage" Jan 21 13:26:39 crc kubenswrapper[4765]: E0121 13:26:39.386430 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="995d0c57-db1c-4e45-a405-cc87dc9094da" containerName="init" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.386438 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="995d0c57-db1c-4e45-a405-cc87dc9094da" containerName="init" Jan 21 13:26:39 crc kubenswrapper[4765]: E0121 13:26:39.386464 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2757aad1-8460-4e56-a626-3b4e332dcc91" containerName="nova-scheduler-scheduler" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.386473 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="2757aad1-8460-4e56-a626-3b4e332dcc91" containerName="nova-scheduler-scheduler" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.386687 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="995d0c57-db1c-4e45-a405-cc87dc9094da" containerName="dnsmasq-dns" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.386718 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ea33372-7a63-416b-a934-2f938cf0a212" containerName="nova-manage" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.386730 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="2757aad1-8460-4e56-a626-3b4e332dcc91" containerName="nova-scheduler-scheduler" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.387643 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.390727 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.395663 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.449395 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a509b9-a443-47bc-b693-4faa2e417ce8-config-data\") pod \"nova-scheduler-0\" (UID: \"f1a509b9-a443-47bc-b693-4faa2e417ce8\") " pod="openstack/nova-scheduler-0" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.449503 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97btv\" (UniqueName: \"kubernetes.io/projected/f1a509b9-a443-47bc-b693-4faa2e417ce8-kube-api-access-97btv\") pod \"nova-scheduler-0\" (UID: \"f1a509b9-a443-47bc-b693-4faa2e417ce8\") " pod="openstack/nova-scheduler-0" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.449577 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a509b9-a443-47bc-b693-4faa2e417ce8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f1a509b9-a443-47bc-b693-4faa2e417ce8\") " pod="openstack/nova-scheduler-0" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.551755 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a509b9-a443-47bc-b693-4faa2e417ce8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f1a509b9-a443-47bc-b693-4faa2e417ce8\") " pod="openstack/nova-scheduler-0" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.551865 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a509b9-a443-47bc-b693-4faa2e417ce8-config-data\") pod \"nova-scheduler-0\" (UID: \"f1a509b9-a443-47bc-b693-4faa2e417ce8\") " pod="openstack/nova-scheduler-0" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.551936 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97btv\" (UniqueName: \"kubernetes.io/projected/f1a509b9-a443-47bc-b693-4faa2e417ce8-kube-api-access-97btv\") pod \"nova-scheduler-0\" (UID: \"f1a509b9-a443-47bc-b693-4faa2e417ce8\") " pod="openstack/nova-scheduler-0" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.556702 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a509b9-a443-47bc-b693-4faa2e417ce8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f1a509b9-a443-47bc-b693-4faa2e417ce8\") " pod="openstack/nova-scheduler-0" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.565783 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a509b9-a443-47bc-b693-4faa2e417ce8-config-data\") pod \"nova-scheduler-0\" (UID: \"f1a509b9-a443-47bc-b693-4faa2e417ce8\") " pod="openstack/nova-scheduler-0" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.574856 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97btv\" (UniqueName: 
\"kubernetes.io/projected/f1a509b9-a443-47bc-b693-4faa2e417ce8-kube-api-access-97btv\") pod \"nova-scheduler-0\" (UID: \"f1a509b9-a443-47bc-b693-4faa2e417ce8\") " pod="openstack/nova-scheduler-0" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.623827 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2757aad1-8460-4e56-a626-3b4e332dcc91" path="/var/lib/kubelet/pods/2757aad1-8460-4e56-a626-3b4e332dcc91/volumes" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.699788 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:42412->10.217.0.197:8775: read: connection reset by peer" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.699913 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:42414->10.217.0.197:8775: read: connection reset by peer" Jan 21 13:26:39 crc kubenswrapper[4765]: I0121 13:26:39.709827 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.212178 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 13:26:40 crc kubenswrapper[4765]: W0121 13:26:40.215948 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1a509b9_a443_47bc_b693_4faa2e417ce8.slice/crio-1069fb5722334b48edce53f92842fa003c3f92c132a0fbb4ef53aff739129293 WatchSource:0}: Error finding container 1069fb5722334b48edce53f92842fa003c3f92c132a0fbb4ef53aff739129293: Status 404 returned error can't find the container with id 1069fb5722334b48edce53f92842fa003c3f92c132a0fbb4ef53aff739129293 Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.239733 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.271286 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-combined-ca-bundle\") pod \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.271678 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snnp5\" (UniqueName: \"kubernetes.io/projected/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-kube-api-access-snnp5\") pod \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.271816 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-logs\") pod \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.271993 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-nova-metadata-tls-certs\") pod \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.273424 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-config-data\") pod \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\" (UID: \"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2\") " Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.275364 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-logs" (OuterVolumeSpecName: "logs") pod "de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" (UID: "de0b50ba-0ce9-4f4f-9170-a74ed5b041c2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.309477 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.311163 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2","Type":"ContainerDied","Data":"96731ad47a3de7ff47a292c13eac49da838f62f9ec25fad2ace6243acd589a62"} Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.311314 4765 scope.go:117] "RemoveContainer" containerID="96731ad47a3de7ff47a292c13eac49da838f62f9ec25fad2ace6243acd589a62" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.317412 4765 generic.go:334] "Generic (PLEG): container finished" podID="de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" containerID="96731ad47a3de7ff47a292c13eac49da838f62f9ec25fad2ace6243acd589a62" exitCode=0 Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.317539 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de0b50ba-0ce9-4f4f-9170-a74ed5b041c2","Type":"ContainerDied","Data":"9bbdcc948b800ba233274d64ecb6e83ad2359fecfaedc4ca6d1b95f3997abe2b"} Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.324702 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-kube-api-access-snnp5" (OuterVolumeSpecName: "kube-api-access-snnp5") pod "de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" (UID: "de0b50ba-0ce9-4f4f-9170-a74ed5b041c2"). InnerVolumeSpecName "kube-api-access-snnp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.338589 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" (UID: "de0b50ba-0ce9-4f4f-9170-a74ed5b041c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.340177 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f1a509b9-a443-47bc-b693-4faa2e417ce8","Type":"ContainerStarted","Data":"1069fb5722334b48edce53f92842fa003c3f92c132a0fbb4ef53aff739129293"} Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.344394 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-config-data" (OuterVolumeSpecName: "config-data") pod "de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" (UID: "de0b50ba-0ce9-4f4f-9170-a74ed5b041c2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.360715 4765 scope.go:117] "RemoveContainer" containerID="b226945c0741ed2e8aa7fb1c4705ec73827e6dfd6aa7d242776b766bf1feb4f2" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.376811 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.376848 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.376861 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snnp5\" (UniqueName: \"kubernetes.io/projected/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-kube-api-access-snnp5\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.376872 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.378874 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" (UID: "de0b50ba-0ce9-4f4f-9170-a74ed5b041c2"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.401553 4765 scope.go:117] "RemoveContainer" containerID="96731ad47a3de7ff47a292c13eac49da838f62f9ec25fad2ace6243acd589a62" Jan 21 13:26:40 crc kubenswrapper[4765]: E0121 13:26:40.402596 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96731ad47a3de7ff47a292c13eac49da838f62f9ec25fad2ace6243acd589a62\": container with ID starting with 96731ad47a3de7ff47a292c13eac49da838f62f9ec25fad2ace6243acd589a62 not found: ID does not exist" containerID="96731ad47a3de7ff47a292c13eac49da838f62f9ec25fad2ace6243acd589a62" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.402640 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96731ad47a3de7ff47a292c13eac49da838f62f9ec25fad2ace6243acd589a62"} err="failed to get container status \"96731ad47a3de7ff47a292c13eac49da838f62f9ec25fad2ace6243acd589a62\": rpc error: code = NotFound desc = could not find container \"96731ad47a3de7ff47a292c13eac49da838f62f9ec25fad2ace6243acd589a62\": container with ID starting with 96731ad47a3de7ff47a292c13eac49da838f62f9ec25fad2ace6243acd589a62 not found: ID does not exist" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.402671 4765 scope.go:117] "RemoveContainer" containerID="b226945c0741ed2e8aa7fb1c4705ec73827e6dfd6aa7d242776b766bf1feb4f2" Jan 21 13:26:40 crc kubenswrapper[4765]: E0121 13:26:40.403145 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b226945c0741ed2e8aa7fb1c4705ec73827e6dfd6aa7d242776b766bf1feb4f2\": container with ID starting with b226945c0741ed2e8aa7fb1c4705ec73827e6dfd6aa7d242776b766bf1feb4f2 not found: ID does not 
exist" containerID="b226945c0741ed2e8aa7fb1c4705ec73827e6dfd6aa7d242776b766bf1feb4f2" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.403191 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b226945c0741ed2e8aa7fb1c4705ec73827e6dfd6aa7d242776b766bf1feb4f2"} err="failed to get container status \"b226945c0741ed2e8aa7fb1c4705ec73827e6dfd6aa7d242776b766bf1feb4f2\": rpc error: code = NotFound desc = could not find container \"b226945c0741ed2e8aa7fb1c4705ec73827e6dfd6aa7d242776b766bf1feb4f2\": container with ID starting with b226945c0741ed2e8aa7fb1c4705ec73827e6dfd6aa7d242776b766bf1feb4f2 not found: ID does not exist" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.478848 4765 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.653716 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.663840 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.678386 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:26:40 crc kubenswrapper[4765]: E0121 13:26:40.678953 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" containerName="nova-metadata-metadata" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.678979 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" containerName="nova-metadata-metadata" Jan 21 13:26:40 crc kubenswrapper[4765]: E0121 13:26:40.679002 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" containerName="nova-metadata-log" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.679010 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" containerName="nova-metadata-log" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.679271 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" containerName="nova-metadata-log" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.679295 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" containerName="nova-metadata-metadata" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.680508 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.686052 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.686111 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.708552 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.782705 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qcwx\" (UniqueName: \"kubernetes.io/projected/3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa-kube-api-access-8qcwx\") pod \"nova-metadata-0\" (UID: \"3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa\") " pod="openstack/nova-metadata-0" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.782750 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa\") " pod="openstack/nova-metadata-0" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.782795 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa-config-data\") pod \"nova-metadata-0\" (UID: \"3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa\") " pod="openstack/nova-metadata-0" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.782893 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa\") " pod="openstack/nova-metadata-0" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.782934 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa-logs\") pod \"nova-metadata-0\" (UID: \"3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa\") " pod="openstack/nova-metadata-0" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.883907 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qcwx\" (UniqueName: \"kubernetes.io/projected/3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa-kube-api-access-8qcwx\") pod \"nova-metadata-0\" (UID: \"3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa\") " pod="openstack/nova-metadata-0" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.883975 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa\") " pod="openstack/nova-metadata-0" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.884040 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa-config-data\") pod \"nova-metadata-0\" (UID: \"3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa\") " 
pod="openstack/nova-metadata-0" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.884127 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa\") " pod="openstack/nova-metadata-0" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.884777 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa-logs\") pod \"nova-metadata-0\" (UID: \"3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa\") " pod="openstack/nova-metadata-0" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.885338 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa-logs\") pod \"nova-metadata-0\" (UID: \"3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa\") " pod="openstack/nova-metadata-0" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.896236 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa\") " pod="openstack/nova-metadata-0" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.896482 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa\") " pod="openstack/nova-metadata-0" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.897698 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa-config-data\") pod \"nova-metadata-0\" (UID: \"3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa\") " pod="openstack/nova-metadata-0" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.903432 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qcwx\" (UniqueName: \"kubernetes.io/projected/3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa-kube-api-access-8qcwx\") pod \"nova-metadata-0\" (UID: \"3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa\") " pod="openstack/nova-metadata-0" Jan 21 13:26:40 crc kubenswrapper[4765]: I0121 13:26:40.975446 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f6k58" podUID="4762a010-6fab-4ee6-bbb8-f5d6669b079a" containerName="registry-server" probeResult="failure" output=< Jan 21 13:26:40 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s Jan 21 13:26:40 crc kubenswrapper[4765]: > Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.000162 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.359042 4765 generic.go:334] "Generic (PLEG): container finished" podID="037fd778-55f2-416b-aa29-b74bd3176070" containerID="c75e5c74123ab5f20f4043999b3ff0023d004fd87cc64c920bf259c56c770908" exitCode=0 Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.359110 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"037fd778-55f2-416b-aa29-b74bd3176070","Type":"ContainerDied","Data":"c75e5c74123ab5f20f4043999b3ff0023d004fd87cc64c920bf259c56c770908"} Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.361181 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f1a509b9-a443-47bc-b693-4faa2e417ce8","Type":"ContainerStarted","Data":"a3a2998b937ac2844322c79d6ea9ce658a5f9a9d0bc8af4278ded95857357689"} Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.389226 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.38918982 podStartE2EDuration="2.38918982s" podCreationTimestamp="2026-01-21 13:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:26:41.378458178 +0000 UTC m=+1462.396184000" watchObservedRunningTime="2026-01-21 13:26:41.38918982 +0000 UTC m=+1462.406915642" Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.397313 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.498531 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/037fd778-55f2-416b-aa29-b74bd3176070-logs\") pod \"037fd778-55f2-416b-aa29-b74bd3176070\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.498697 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58nfg\" (UniqueName: \"kubernetes.io/projected/037fd778-55f2-416b-aa29-b74bd3176070-kube-api-access-58nfg\") pod \"037fd778-55f2-416b-aa29-b74bd3176070\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.498750 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-public-tls-certs\") pod \"037fd778-55f2-416b-aa29-b74bd3176070\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.498823 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-combined-ca-bundle\") pod \"037fd778-55f2-416b-aa29-b74bd3176070\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.498853 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-config-data\") pod \"037fd778-55f2-416b-aa29-b74bd3176070\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.498932 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-internal-tls-certs\") pod \"037fd778-55f2-416b-aa29-b74bd3176070\" (UID: \"037fd778-55f2-416b-aa29-b74bd3176070\") " Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.499142 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/037fd778-55f2-416b-aa29-b74bd3176070-logs" (OuterVolumeSpecName: "logs") pod "037fd778-55f2-416b-aa29-b74bd3176070" (UID: "037fd778-55f2-416b-aa29-b74bd3176070"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.499551 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/037fd778-55f2-416b-aa29-b74bd3176070-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.507140 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/037fd778-55f2-416b-aa29-b74bd3176070-kube-api-access-58nfg" (OuterVolumeSpecName: "kube-api-access-58nfg") pod "037fd778-55f2-416b-aa29-b74bd3176070" (UID: "037fd778-55f2-416b-aa29-b74bd3176070"). InnerVolumeSpecName "kube-api-access-58nfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.530966 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "037fd778-55f2-416b-aa29-b74bd3176070" (UID: "037fd778-55f2-416b-aa29-b74bd3176070"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.536930 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-config-data" (OuterVolumeSpecName: "config-data") pod "037fd778-55f2-416b-aa29-b74bd3176070" (UID: "037fd778-55f2-416b-aa29-b74bd3176070"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.564626 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "037fd778-55f2-416b-aa29-b74bd3176070" (UID: "037fd778-55f2-416b-aa29-b74bd3176070"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.572655 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "037fd778-55f2-416b-aa29-b74bd3176070" (UID: "037fd778-55f2-416b-aa29-b74bd3176070"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.604559 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.607642 4765 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.611612 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58nfg\" (UniqueName: \"kubernetes.io/projected/037fd778-55f2-416b-aa29-b74bd3176070-kube-api-access-58nfg\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.611698 4765 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.611755 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.611877 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/037fd778-55f2-416b-aa29-b74bd3176070-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:41 crc kubenswrapper[4765]: I0121 13:26:41.630161 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de0b50ba-0ce9-4f4f-9170-a74ed5b041c2" path="/var/lib/kubelet/pods/de0b50ba-0ce9-4f4f-9170-a74ed5b041c2/volumes" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.377190 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa","Type":"ContainerStarted","Data":"87f191992717bbe4750db6b24d6642fcb649f2e8817a03ba9083ac03a4609ab7"} Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.377576 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa","Type":"ContainerStarted","Data":"630f2762cc9f3bd86779f33820e086fbae9b490214265f1dc18ec08d476c2c8b"} Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.377594 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa","Type":"ContainerStarted","Data":"9c6153a34064b3e2270442436ade30e5186fdeb2500b42ac6f5956e2107ffcc8"} Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.379640 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.379639 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"037fd778-55f2-416b-aa29-b74bd3176070","Type":"ContainerDied","Data":"70713bdd2d5978f955ccbf1dc90a55716c1caa44c018e42afd65a61827881ea9"} Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.379880 4765 scope.go:117] "RemoveContainer" containerID="c75e5c74123ab5f20f4043999b3ff0023d004fd87cc64c920bf259c56c770908" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.406301 4765 scope.go:117] "RemoveContainer" containerID="0d2bd56c1b4b3ce981594cdbc5b6f8b85d4e492b0fe2d166d15409c7f93ce572" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.406773 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.406750915 podStartE2EDuration="2.406750915s" podCreationTimestamp="2026-01-21 13:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:26:42.402960995 +0000 UTC m=+1463.420686817" watchObservedRunningTime="2026-01-21 13:26:42.406750915 +0000 UTC m=+1463.424476737" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.433606 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.450978 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.465513 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 13:26:42 crc kubenswrapper[4765]: E0121 13:26:42.465962 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037fd778-55f2-416b-aa29-b74bd3176070" containerName="nova-api-api" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.465977 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="037fd778-55f2-416b-aa29-b74bd3176070" containerName="nova-api-api" Jan 21 13:26:42 crc kubenswrapper[4765]: E0121 13:26:42.466022 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037fd778-55f2-416b-aa29-b74bd3176070" containerName="nova-api-log" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.466028 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="037fd778-55f2-416b-aa29-b74bd3176070" containerName="nova-api-log" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.466506 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="037fd778-55f2-416b-aa29-b74bd3176070" containerName="nova-api-log" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.466528 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="037fd778-55f2-416b-aa29-b74bd3176070" containerName="nova-api-api" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.467556 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.471421 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.471733 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.476778 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.485803 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.541809 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvcz9\" (UniqueName: \"kubernetes.io/projected/e6ce4b6e-90fe-41ba-a3e8-15fc98276798-kube-api-access-hvcz9\") pod \"nova-api-0\" (UID: \"e6ce4b6e-90fe-41ba-a3e8-15fc98276798\") " pod="openstack/nova-api-0" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.542016 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ce4b6e-90fe-41ba-a3e8-15fc98276798-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e6ce4b6e-90fe-41ba-a3e8-15fc98276798\") " pod="openstack/nova-api-0" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.542088 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6ce4b6e-90fe-41ba-a3e8-15fc98276798-public-tls-certs\") pod \"nova-api-0\" (UID: \"e6ce4b6e-90fe-41ba-a3e8-15fc98276798\") " pod="openstack/nova-api-0" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.542267 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6ce4b6e-90fe-41ba-a3e8-15fc98276798-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e6ce4b6e-90fe-41ba-a3e8-15fc98276798\") " pod="openstack/nova-api-0" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.542320 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ce4b6e-90fe-41ba-a3e8-15fc98276798-config-data\") pod \"nova-api-0\" (UID: \"e6ce4b6e-90fe-41ba-a3e8-15fc98276798\") " pod="openstack/nova-api-0" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.542356 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6ce4b6e-90fe-41ba-a3e8-15fc98276798-logs\") pod \"nova-api-0\" (UID: \"e6ce4b6e-90fe-41ba-a3e8-15fc98276798\") " pod="openstack/nova-api-0" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.644301 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvcz9\" (UniqueName: \"kubernetes.io/projected/e6ce4b6e-90fe-41ba-a3e8-15fc98276798-kube-api-access-hvcz9\") pod \"nova-api-0\" (UID: \"e6ce4b6e-90fe-41ba-a3e8-15fc98276798\") " pod="openstack/nova-api-0" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.644380 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ce4b6e-90fe-41ba-a3e8-15fc98276798-combined-ca-bundle\") 
pod \"nova-api-0\" (UID: \"e6ce4b6e-90fe-41ba-a3e8-15fc98276798\") " pod="openstack/nova-api-0" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.644402 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6ce4b6e-90fe-41ba-a3e8-15fc98276798-public-tls-certs\") pod \"nova-api-0\" (UID: \"e6ce4b6e-90fe-41ba-a3e8-15fc98276798\") " pod="openstack/nova-api-0" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.644455 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ce4b6e-90fe-41ba-a3e8-15fc98276798-config-data\") pod \"nova-api-0\" (UID: \"e6ce4b6e-90fe-41ba-a3e8-15fc98276798\") " pod="openstack/nova-api-0" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.644477 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6ce4b6e-90fe-41ba-a3e8-15fc98276798-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e6ce4b6e-90fe-41ba-a3e8-15fc98276798\") " pod="openstack/nova-api-0" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.644505 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6ce4b6e-90fe-41ba-a3e8-15fc98276798-logs\") pod \"nova-api-0\" (UID: \"e6ce4b6e-90fe-41ba-a3e8-15fc98276798\") " pod="openstack/nova-api-0" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.647039 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6ce4b6e-90fe-41ba-a3e8-15fc98276798-logs\") pod \"nova-api-0\" (UID: \"e6ce4b6e-90fe-41ba-a3e8-15fc98276798\") " pod="openstack/nova-api-0" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.651540 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6ce4b6e-90fe-41ba-a3e8-15fc98276798-public-tls-certs\") pod \"nova-api-0\" (UID: \"e6ce4b6e-90fe-41ba-a3e8-15fc98276798\") " pod="openstack/nova-api-0" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.651974 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6ce4b6e-90fe-41ba-a3e8-15fc98276798-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e6ce4b6e-90fe-41ba-a3e8-15fc98276798\") " pod="openstack/nova-api-0" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.652596 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ce4b6e-90fe-41ba-a3e8-15fc98276798-config-data\") pod \"nova-api-0\" (UID: \"e6ce4b6e-90fe-41ba-a3e8-15fc98276798\") " pod="openstack/nova-api-0" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.654369 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ce4b6e-90fe-41ba-a3e8-15fc98276798-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e6ce4b6e-90fe-41ba-a3e8-15fc98276798\") " pod="openstack/nova-api-0" Jan 21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.666050 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvcz9\" (UniqueName: \"kubernetes.io/projected/e6ce4b6e-90fe-41ba-a3e8-15fc98276798-kube-api-access-hvcz9\") pod \"nova-api-0\" (UID: \"e6ce4b6e-90fe-41ba-a3e8-15fc98276798\") " pod="openstack/nova-api-0" Jan 
21 13:26:42 crc kubenswrapper[4765]: I0121 13:26:42.798636 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 13:26:43 crc kubenswrapper[4765]: I0121 13:26:43.268450 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 13:26:43 crc kubenswrapper[4765]: I0121 13:26:43.280836 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6558674dbd-lct5s" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 21 13:26:43 crc kubenswrapper[4765]: I0121 13:26:43.280989 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6558674dbd-lct5s" Jan 21 13:26:43 crc kubenswrapper[4765]: I0121 13:26:43.400352 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6ce4b6e-90fe-41ba-a3e8-15fc98276798","Type":"ContainerStarted","Data":"d6db754e5ec8beaf5459abcd787d337a46807d4264ccbedeba6356a15c514c30"} Jan 21 13:26:43 crc kubenswrapper[4765]: I0121 13:26:43.642451 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="037fd778-55f2-416b-aa29-b74bd3176070" path="/var/lib/kubelet/pods/037fd778-55f2-416b-aa29-b74bd3176070/volumes" Jan 21 13:26:44 crc kubenswrapper[4765]: I0121 13:26:44.411330 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6ce4b6e-90fe-41ba-a3e8-15fc98276798","Type":"ContainerStarted","Data":"59596f61299ff3e5126f1e444335d591c5971ee6a260e894086752a6a60907a9"} Jan 21 13:26:44 crc kubenswrapper[4765]: I0121 13:26:44.411662 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e6ce4b6e-90fe-41ba-a3e8-15fc98276798","Type":"ContainerStarted","Data":"315f926a870e4d434acd6990348867b946e997703f7c733c0c43ee2dd3d1dd4c"} Jan 21 13:26:44 crc kubenswrapper[4765]: I0121 13:26:44.440947 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.440929576 podStartE2EDuration="2.440929576s" podCreationTimestamp="2026-01-21 13:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:26:44.432965324 +0000 UTC m=+1465.450691166" watchObservedRunningTime="2026-01-21 13:26:44.440929576 +0000 UTC m=+1465.458655388" Jan 21 13:26:44 crc kubenswrapper[4765]: I0121 13:26:44.446033 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:26:44 crc kubenswrapper[4765]: I0121 13:26:44.446089 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:26:44 crc kubenswrapper[4765]: I0121 13:26:44.446134 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:26:44 crc 
kubenswrapper[4765]: I0121 13:26:44.446905 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c509a513e1ebf6d2d06160d429b88c481004be78e418699ef3864eb908e3f4c"} pod="openshift-machine-config-operator/machine-config-daemon-v72nq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:26:44 crc kubenswrapper[4765]: I0121 13:26:44.446960 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" containerID="cri-o://6c509a513e1ebf6d2d06160d429b88c481004be78e418699ef3864eb908e3f4c" gracePeriod=600 Jan 21 13:26:44 crc kubenswrapper[4765]: I0121 13:26:44.710232 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 13:26:45 crc kubenswrapper[4765]: I0121 13:26:45.422986 4765 generic.go:334] "Generic (PLEG): container finished" podID="e149390c-e4da-4dfd-bed2-b14de058f921" containerID="6c509a513e1ebf6d2d06160d429b88c481004be78e418699ef3864eb908e3f4c" exitCode=0 Jan 21 13:26:45 crc kubenswrapper[4765]: I0121 13:26:45.423080 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerDied","Data":"6c509a513e1ebf6d2d06160d429b88c481004be78e418699ef3864eb908e3f4c"} Jan 21 13:26:45 crc kubenswrapper[4765]: I0121 13:26:45.424371 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0"} Jan 21 13:26:45 crc kubenswrapper[4765]: I0121 13:26:45.424436 4765 scope.go:117] "RemoveContainer" containerID="d6699bbbe2d11832c001ff2e320299357488d5335ab1941c1de1fb9e99aec3a1" Jan 21 13:26:46 crc kubenswrapper[4765]: I0121 13:26:46.000867 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 13:26:46 crc kubenswrapper[4765]: I0121 13:26:46.001419 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 13:26:49 crc kubenswrapper[4765]: I0121 13:26:49.710124 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 13:26:49 crc kubenswrapper[4765]: I0121 13:26:49.745525 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.013343 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f6k58" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.100104 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f6k58" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.228861 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6558674dbd-lct5s" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.320785 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/074ae613-bc7f-4443-abdb-7010b6054997-scripts\") pod \"074ae613-bc7f-4443-abdb-7010b6054997\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.321822 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/074ae613-bc7f-4443-abdb-7010b6054997-combined-ca-bundle\") pod \"074ae613-bc7f-4443-abdb-7010b6054997\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.321916 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/074ae613-bc7f-4443-abdb-7010b6054997-logs\") pod \"074ae613-bc7f-4443-abdb-7010b6054997\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.321975 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf8xn\" (UniqueName: \"kubernetes.io/projected/074ae613-bc7f-4443-abdb-7010b6054997-kube-api-access-lf8xn\") pod \"074ae613-bc7f-4443-abdb-7010b6054997\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.322058 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/074ae613-bc7f-4443-abdb-7010b6054997-horizon-secret-key\") pod \"074ae613-bc7f-4443-abdb-7010b6054997\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.322117 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/074ae613-bc7f-4443-abdb-7010b6054997-horizon-tls-certs\") pod \"074ae613-bc7f-4443-abdb-7010b6054997\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.322142 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/074ae613-bc7f-4443-abdb-7010b6054997-config-data\") pod \"074ae613-bc7f-4443-abdb-7010b6054997\" (UID: \"074ae613-bc7f-4443-abdb-7010b6054997\") " Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.323144 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/074ae613-bc7f-4443-abdb-7010b6054997-logs" (OuterVolumeSpecName: "logs") pod "074ae613-bc7f-4443-abdb-7010b6054997" (UID: "074ae613-bc7f-4443-abdb-7010b6054997"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.327356 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/074ae613-bc7f-4443-abdb-7010b6054997-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "074ae613-bc7f-4443-abdb-7010b6054997" (UID: "074ae613-bc7f-4443-abdb-7010b6054997"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.339566 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/074ae613-bc7f-4443-abdb-7010b6054997-kube-api-access-lf8xn" (OuterVolumeSpecName: "kube-api-access-lf8xn") pod "074ae613-bc7f-4443-abdb-7010b6054997" (UID: "074ae613-bc7f-4443-abdb-7010b6054997"). InnerVolumeSpecName "kube-api-access-lf8xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.359626 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/074ae613-bc7f-4443-abdb-7010b6054997-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "074ae613-bc7f-4443-abdb-7010b6054997" (UID: "074ae613-bc7f-4443-abdb-7010b6054997"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.362292 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/074ae613-bc7f-4443-abdb-7010b6054997-scripts" (OuterVolumeSpecName: "scripts") pod "074ae613-bc7f-4443-abdb-7010b6054997" (UID: "074ae613-bc7f-4443-abdb-7010b6054997"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.363466 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/074ae613-bc7f-4443-abdb-7010b6054997-config-data" (OuterVolumeSpecName: "config-data") pod "074ae613-bc7f-4443-abdb-7010b6054997" (UID: "074ae613-bc7f-4443-abdb-7010b6054997"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.398520 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/074ae613-bc7f-4443-abdb-7010b6054997-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "074ae613-bc7f-4443-abdb-7010b6054997" (UID: "074ae613-bc7f-4443-abdb-7010b6054997"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.425418 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/074ae613-bc7f-4443-abdb-7010b6054997-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.425463 4765 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/074ae613-bc7f-4443-abdb-7010b6054997-logs\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.425478 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf8xn\" (UniqueName: \"kubernetes.io/projected/074ae613-bc7f-4443-abdb-7010b6054997-kube-api-access-lf8xn\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.425490 4765 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/074ae613-bc7f-4443-abdb-7010b6054997-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.425500 4765 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/074ae613-bc7f-4443-abdb-7010b6054997-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.425508 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/074ae613-bc7f-4443-abdb-7010b6054997-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.425516 4765 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/074ae613-bc7f-4443-abdb-7010b6054997-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.476244 4765 generic.go:334] "Generic (PLEG): container finished" podID="074ae613-bc7f-4443-abdb-7010b6054997" containerID="80bf8f8075aaafb1737281da7be1eba64cc3312c18d9db5a1ce9e20ad270bd85" exitCode=137 Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.477198 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6558674dbd-lct5s" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.477268 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6558674dbd-lct5s" event={"ID":"074ae613-bc7f-4443-abdb-7010b6054997","Type":"ContainerDied","Data":"80bf8f8075aaafb1737281da7be1eba64cc3312c18d9db5a1ce9e20ad270bd85"} Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.477291 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6558674dbd-lct5s" event={"ID":"074ae613-bc7f-4443-abdb-7010b6054997","Type":"ContainerDied","Data":"c94eb348928801014ccf9c915bc637093a81ba6b2b4e7703298b64c3fa0a3b4c"} Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.477306 4765 scope.go:117] "RemoveContainer" containerID="689cae05dcc0e9b9b0adda9d542e5c2b2db33884367706410ddf5bee650aba60" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.520939 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.523287 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6558674dbd-lct5s"] Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.543576 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6558674dbd-lct5s"] Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.756561 4765 scope.go:117] "RemoveContainer" containerID="80bf8f8075aaafb1737281da7be1eba64cc3312c18d9db5a1ce9e20ad270bd85" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.789361 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f6k58"] Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.825439 4765 scope.go:117] "RemoveContainer" containerID="689cae05dcc0e9b9b0adda9d542e5c2b2db33884367706410ddf5bee650aba60" Jan 21 13:26:50 crc kubenswrapper[4765]: E0121 13:26:50.832083 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"689cae05dcc0e9b9b0adda9d542e5c2b2db33884367706410ddf5bee650aba60\": container with ID starting with 689cae05dcc0e9b9b0adda9d542e5c2b2db33884367706410ddf5bee650aba60 not found: ID does not exist" containerID="689cae05dcc0e9b9b0adda9d542e5c2b2db33884367706410ddf5bee650aba60" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.832141 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"689cae05dcc0e9b9b0adda9d542e5c2b2db33884367706410ddf5bee650aba60"} err="failed to get container status \"689cae05dcc0e9b9b0adda9d542e5c2b2db33884367706410ddf5bee650aba60\": rpc error: code = NotFound desc = could not find container \"689cae05dcc0e9b9b0adda9d542e5c2b2db33884367706410ddf5bee650aba60\": container with ID starting with 689cae05dcc0e9b9b0adda9d542e5c2b2db33884367706410ddf5bee650aba60 not found: ID does not exist" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.832176 4765 scope.go:117] "RemoveContainer" containerID="80bf8f8075aaafb1737281da7be1eba64cc3312c18d9db5a1ce9e20ad270bd85" Jan 21 13:26:50 crc kubenswrapper[4765]: E0121 13:26:50.834058 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80bf8f8075aaafb1737281da7be1eba64cc3312c18d9db5a1ce9e20ad270bd85\": container with ID starting with 80bf8f8075aaafb1737281da7be1eba64cc3312c18d9db5a1ce9e20ad270bd85 not found: ID does not exist" 
containerID="80bf8f8075aaafb1737281da7be1eba64cc3312c18d9db5a1ce9e20ad270bd85" Jan 21 13:26:50 crc kubenswrapper[4765]: I0121 13:26:50.834095 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80bf8f8075aaafb1737281da7be1eba64cc3312c18d9db5a1ce9e20ad270bd85"} err="failed to get container status \"80bf8f8075aaafb1737281da7be1eba64cc3312c18d9db5a1ce9e20ad270bd85\": rpc error: code = NotFound desc = could not find container \"80bf8f8075aaafb1737281da7be1eba64cc3312c18d9db5a1ce9e20ad270bd85\": container with ID starting with 80bf8f8075aaafb1737281da7be1eba64cc3312c18d9db5a1ce9e20ad270bd85 not found: ID does not exist" Jan 21 13:26:51 crc kubenswrapper[4765]: I0121 13:26:51.001783 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 13:26:51 crc kubenswrapper[4765]: I0121 13:26:51.001831 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 13:26:51 crc kubenswrapper[4765]: I0121 13:26:51.468427 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 13:26:51 crc kubenswrapper[4765]: I0121 13:26:51.507916 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f6k58" podUID="4762a010-6fab-4ee6-bbb8-f5d6669b079a" containerName="registry-server" containerID="cri-o://33b8ad6461fb787879d86af6a94c7ef15715a17004c598cb618efb686eee07c9" gracePeriod=2 Jan 21 13:26:51 crc kubenswrapper[4765]: I0121 13:26:51.692705 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="074ae613-bc7f-4443-abdb-7010b6054997" path="/var/lib/kubelet/pods/074ae613-bc7f-4443-abdb-7010b6054997/volumes" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.020177 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.020265 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.149458 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f6k58" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.193472 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4762a010-6fab-4ee6-bbb8-f5d6669b079a-utilities\") pod \"4762a010-6fab-4ee6-bbb8-f5d6669b079a\" (UID: \"4762a010-6fab-4ee6-bbb8-f5d6669b079a\") " Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.193595 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt65h\" (UniqueName: \"kubernetes.io/projected/4762a010-6fab-4ee6-bbb8-f5d6669b079a-kube-api-access-kt65h\") pod \"4762a010-6fab-4ee6-bbb8-f5d6669b079a\" (UID: \"4762a010-6fab-4ee6-bbb8-f5d6669b079a\") " Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.193681 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4762a010-6fab-4ee6-bbb8-f5d6669b079a-catalog-content\") pod \"4762a010-6fab-4ee6-bbb8-f5d6669b079a\" (UID: \"4762a010-6fab-4ee6-bbb8-f5d6669b079a\") " Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.200358 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4762a010-6fab-4ee6-bbb8-f5d6669b079a-utilities" (OuterVolumeSpecName: "utilities") pod "4762a010-6fab-4ee6-bbb8-f5d6669b079a" (UID: "4762a010-6fab-4ee6-bbb8-f5d6669b079a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.229992 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4762a010-6fab-4ee6-bbb8-f5d6669b079a-kube-api-access-kt65h" (OuterVolumeSpecName: "kube-api-access-kt65h") pod "4762a010-6fab-4ee6-bbb8-f5d6669b079a" (UID: "4762a010-6fab-4ee6-bbb8-f5d6669b079a"). InnerVolumeSpecName "kube-api-access-kt65h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.296550 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4762a010-6fab-4ee6-bbb8-f5d6669b079a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.296593 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt65h\" (UniqueName: \"kubernetes.io/projected/4762a010-6fab-4ee6-bbb8-f5d6669b079a-kube-api-access-kt65h\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.348371 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4762a010-6fab-4ee6-bbb8-f5d6669b079a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4762a010-6fab-4ee6-bbb8-f5d6669b079a" (UID: "4762a010-6fab-4ee6-bbb8-f5d6669b079a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.398222 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4762a010-6fab-4ee6-bbb8-f5d6669b079a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.507382 4765 generic.go:334] "Generic (PLEG): container finished" podID="4762a010-6fab-4ee6-bbb8-f5d6669b079a" containerID="33b8ad6461fb787879d86af6a94c7ef15715a17004c598cb618efb686eee07c9" exitCode=0 Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.507430 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6k58" event={"ID":"4762a010-6fab-4ee6-bbb8-f5d6669b079a","Type":"ContainerDied","Data":"33b8ad6461fb787879d86af6a94c7ef15715a17004c598cb618efb686eee07c9"} Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.507465 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6k58" event={"ID":"4762a010-6fab-4ee6-bbb8-f5d6669b079a","Type":"ContainerDied","Data":"0064007af1e654f5f44491c2f96455c7a442eb78cc475d4dd66c6a9e5fa589b6"} Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.507485 4765 scope.go:117] "RemoveContainer" containerID="33b8ad6461fb787879d86af6a94c7ef15715a17004c598cb618efb686eee07c9" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.507635 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f6k58" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.538849 4765 scope.go:117] "RemoveContainer" containerID="b3ec7eaba2862341ea5304aeef0631108a71b35b1f4ceb54dc3d993b87264ae4" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.554141 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f6k58"] Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.563770 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f6k58"] Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.577828 4765 scope.go:117] "RemoveContainer" containerID="f648fd6aa372898604c665640f08dd830134e641fc6339cc6723c4d2b33a9a20" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.626648 4765 scope.go:117] "RemoveContainer" containerID="33b8ad6461fb787879d86af6a94c7ef15715a17004c598cb618efb686eee07c9" Jan 21 13:26:52 crc kubenswrapper[4765]: E0121 13:26:52.628048 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33b8ad6461fb787879d86af6a94c7ef15715a17004c598cb618efb686eee07c9\": container with ID starting with 33b8ad6461fb787879d86af6a94c7ef15715a17004c598cb618efb686eee07c9 not found: ID does not exist" containerID="33b8ad6461fb787879d86af6a94c7ef15715a17004c598cb618efb686eee07c9" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.628089 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33b8ad6461fb787879d86af6a94c7ef15715a17004c598cb618efb686eee07c9"} err="failed to get container status \"33b8ad6461fb787879d86af6a94c7ef15715a17004c598cb618efb686eee07c9\": rpc error: code = NotFound desc = could not find container \"33b8ad6461fb787879d86af6a94c7ef15715a17004c598cb618efb686eee07c9\": container with ID starting with 33b8ad6461fb787879d86af6a94c7ef15715a17004c598cb618efb686eee07c9 not found: ID does not exist" Jan 21 13:26:52 crc 
kubenswrapper[4765]: I0121 13:26:52.628120 4765 scope.go:117] "RemoveContainer" containerID="b3ec7eaba2862341ea5304aeef0631108a71b35b1f4ceb54dc3d993b87264ae4" Jan 21 13:26:52 crc kubenswrapper[4765]: E0121 13:26:52.628633 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ec7eaba2862341ea5304aeef0631108a71b35b1f4ceb54dc3d993b87264ae4\": container with ID starting with b3ec7eaba2862341ea5304aeef0631108a71b35b1f4ceb54dc3d993b87264ae4 not found: ID does not exist" containerID="b3ec7eaba2862341ea5304aeef0631108a71b35b1f4ceb54dc3d993b87264ae4" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.628664 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ec7eaba2862341ea5304aeef0631108a71b35b1f4ceb54dc3d993b87264ae4"} err="failed to get container status \"b3ec7eaba2862341ea5304aeef0631108a71b35b1f4ceb54dc3d993b87264ae4\": rpc error: code = NotFound desc = could not find container \"b3ec7eaba2862341ea5304aeef0631108a71b35b1f4ceb54dc3d993b87264ae4\": container with ID starting with b3ec7eaba2862341ea5304aeef0631108a71b35b1f4ceb54dc3d993b87264ae4 not found: ID does not exist" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.628685 4765 scope.go:117] "RemoveContainer" containerID="f648fd6aa372898604c665640f08dd830134e641fc6339cc6723c4d2b33a9a20" Jan 21 13:26:52 crc kubenswrapper[4765]: E0121 13:26:52.629204 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f648fd6aa372898604c665640f08dd830134e641fc6339cc6723c4d2b33a9a20\": container with ID starting with f648fd6aa372898604c665640f08dd830134e641fc6339cc6723c4d2b33a9a20 not found: ID does not exist" containerID="f648fd6aa372898604c665640f08dd830134e641fc6339cc6723c4d2b33a9a20" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.629299 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f648fd6aa372898604c665640f08dd830134e641fc6339cc6723c4d2b33a9a20"} err="failed to get container status \"f648fd6aa372898604c665640f08dd830134e641fc6339cc6723c4d2b33a9a20\": rpc error: code = NotFound desc = could not find container \"f648fd6aa372898604c665640f08dd830134e641fc6339cc6723c4d2b33a9a20\": container with ID starting with f648fd6aa372898604c665640f08dd830134e641fc6339cc6723c4d2b33a9a20 not found: ID does not exist" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.799568 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 13:26:52 crc kubenswrapper[4765]: I0121 13:26:52.800479 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 13:26:53 crc kubenswrapper[4765]: I0121 13:26:53.630466 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4762a010-6fab-4ee6-bbb8-f5d6669b079a" path="/var/lib/kubelet/pods/4762a010-6fab-4ee6-bbb8-f5d6669b079a/volumes" Jan 21 13:26:53 crc kubenswrapper[4765]: I0121 13:26:53.812532 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e6ce4b6e-90fe-41ba-a3e8-15fc98276798" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 13:26:53 crc kubenswrapper[4765]: I0121 13:26:53.812549 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="e6ce4b6e-90fe-41ba-a3e8-15fc98276798" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 13:27:01 crc kubenswrapper[4765]: I0121 13:27:01.008502 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 13:27:01 crc kubenswrapper[4765]: I0121 13:27:01.015030 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 13:27:01 crc kubenswrapper[4765]: I0121 13:27:01.021895 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 13:27:01 crc kubenswrapper[4765]: I0121 13:27:01.597378 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 13:27:02 crc kubenswrapper[4765]: I0121 13:27:02.806660 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 13:27:02 crc kubenswrapper[4765]: I0121 13:27:02.807741 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 13:27:02 crc kubenswrapper[4765]: I0121 13:27:02.808346 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 13:27:02 crc kubenswrapper[4765]: I0121 13:27:02.808372 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 13:27:02 crc kubenswrapper[4765]: I0121 13:27:02.816579 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 13:27:02 crc kubenswrapper[4765]: I0121 13:27:02.820525 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 13:27:12 crc kubenswrapper[4765]: I0121 13:27:12.263605 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 13:27:13 crc kubenswrapper[4765]: I0121 13:27:13.141921 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 13:27:17 crc kubenswrapper[4765]: I0121 13:27:17.543639 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="4d783178-0ea7-4643-802f-d56722e1df7d" containerName="rabbitmq" containerID="cri-o://85748d994c8b907b866b52a387ecb62d3fb2d52f35909390b09cc0acf091d06e" gracePeriod=604796 Jan 21 13:27:17 crc kubenswrapper[4765]: I0121 13:27:17.749267 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="054275fd-f5b9-4326-98a3-af2cc1d76c17" containerName="rabbitmq" containerID="cri-o://86eb1244c7d3b1abc5524f76b3df354eda942ce6e12f45e000ae681bccd46da4" gracePeriod=604795 Jan 21 13:27:17 crc kubenswrapper[4765]: I0121 13:27:17.918074 4765 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="4d783178-0ea7-4643-802f-d56722e1df7d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Jan 21 13:27:23 crc kubenswrapper[4765]: I0121 13:27:23.802895 4765 generic.go:334] "Generic (PLEG): container finished" podID="4d783178-0ea7-4643-802f-d56722e1df7d" containerID="85748d994c8b907b866b52a387ecb62d3fb2d52f35909390b09cc0acf091d06e" exitCode=0 Jan 21 13:27:23 crc kubenswrapper[4765]: I0121 13:27:23.802971 4765 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4d783178-0ea7-4643-802f-d56722e1df7d","Type":"ContainerDied","Data":"85748d994c8b907b866b52a387ecb62d3fb2d52f35909390b09cc0acf091d06e"} Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.181742 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.267518 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-confd\") pod \"4d783178-0ea7-4643-802f-d56722e1df7d\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.267576 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d783178-0ea7-4643-802f-d56722e1df7d-config-data\") pod \"4d783178-0ea7-4643-802f-d56722e1df7d\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.267608 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"4d783178-0ea7-4643-802f-d56722e1df7d\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.267687 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-erlang-cookie\") pod \"4d783178-0ea7-4643-802f-d56722e1df7d\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.267735 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwpkl\" (UniqueName: \"kubernetes.io/projected/4d783178-0ea7-4643-802f-d56722e1df7d-kube-api-access-xwpkl\") pod \"4d783178-0ea7-4643-802f-d56722e1df7d\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.267782 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d783178-0ea7-4643-802f-d56722e1df7d-plugins-conf\") pod \"4d783178-0ea7-4643-802f-d56722e1df7d\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.267831 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d783178-0ea7-4643-802f-d56722e1df7d-server-conf\") pod \"4d783178-0ea7-4643-802f-d56722e1df7d\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.267870 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d783178-0ea7-4643-802f-d56722e1df7d-pod-info\") pod \"4d783178-0ea7-4643-802f-d56722e1df7d\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.267897 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d783178-0ea7-4643-802f-d56722e1df7d-erlang-cookie-secret\") pod \"4d783178-0ea7-4643-802f-d56722e1df7d\" (UID: 
\"4d783178-0ea7-4643-802f-d56722e1df7d\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.267926 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-tls\") pod \"4d783178-0ea7-4643-802f-d56722e1df7d\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.267972 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-plugins\") pod \"4d783178-0ea7-4643-802f-d56722e1df7d\" (UID: \"4d783178-0ea7-4643-802f-d56722e1df7d\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.269424 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4d783178-0ea7-4643-802f-d56722e1df7d" (UID: "4d783178-0ea7-4643-802f-d56722e1df7d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.270360 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d783178-0ea7-4643-802f-d56722e1df7d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4d783178-0ea7-4643-802f-d56722e1df7d" (UID: "4d783178-0ea7-4643-802f-d56722e1df7d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.279426 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4d783178-0ea7-4643-802f-d56722e1df7d-pod-info" (OuterVolumeSpecName: "pod-info") pod "4d783178-0ea7-4643-802f-d56722e1df7d" (UID: "4d783178-0ea7-4643-802f-d56722e1df7d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.280005 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4d783178-0ea7-4643-802f-d56722e1df7d" (UID: "4d783178-0ea7-4643-802f-d56722e1df7d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.286700 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "4d783178-0ea7-4643-802f-d56722e1df7d" (UID: "4d783178-0ea7-4643-802f-d56722e1df7d"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.301812 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d783178-0ea7-4643-802f-d56722e1df7d-kube-api-access-xwpkl" (OuterVolumeSpecName: "kube-api-access-xwpkl") pod "4d783178-0ea7-4643-802f-d56722e1df7d" (UID: "4d783178-0ea7-4643-802f-d56722e1df7d"). InnerVolumeSpecName "kube-api-access-xwpkl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.323137 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4d783178-0ea7-4643-802f-d56722e1df7d" (UID: "4d783178-0ea7-4643-802f-d56722e1df7d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.338257 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d783178-0ea7-4643-802f-d56722e1df7d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4d783178-0ea7-4643-802f-d56722e1df7d" (UID: "4d783178-0ea7-4643-802f-d56722e1df7d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.359899 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d783178-0ea7-4643-802f-d56722e1df7d-config-data" (OuterVolumeSpecName: "config-data") pod "4d783178-0ea7-4643-802f-d56722e1df7d" (UID: "4d783178-0ea7-4643-802f-d56722e1df7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.370029 4765 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d783178-0ea7-4643-802f-d56722e1df7d-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.370056 4765 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d783178-0ea7-4643-802f-d56722e1df7d-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.370064 4765 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d783178-0ea7-4643-802f-d56722e1df7d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.370074 4765 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.370081 4765 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.370089 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d783178-0ea7-4643-802f-d56722e1df7d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.370108 4765 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.370120 4765 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: 
I0121 13:27:24.370130 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwpkl\" (UniqueName: \"kubernetes.io/projected/4d783178-0ea7-4643-802f-d56722e1df7d-kube-api-access-xwpkl\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.382611 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.471792 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/054275fd-f5b9-4326-98a3-af2cc1d76c17-config-data\") pod \"054275fd-f5b9-4326-98a3-af2cc1d76c17\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.471844 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/054275fd-f5b9-4326-98a3-af2cc1d76c17-pod-info\") pod \"054275fd-f5b9-4326-98a3-af2cc1d76c17\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.471876 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-tls\") pod \"054275fd-f5b9-4326-98a3-af2cc1d76c17\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.471942 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"054275fd-f5b9-4326-98a3-af2cc1d76c17\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.471963 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/054275fd-f5b9-4326-98a3-af2cc1d76c17-erlang-cookie-secret\") pod \"054275fd-f5b9-4326-98a3-af2cc1d76c17\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.471987 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/054275fd-f5b9-4326-98a3-af2cc1d76c17-plugins-conf\") pod \"054275fd-f5b9-4326-98a3-af2cc1d76c17\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.472014 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/054275fd-f5b9-4326-98a3-af2cc1d76c17-server-conf\") pod \"054275fd-f5b9-4326-98a3-af2cc1d76c17\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.472070 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-erlang-cookie\") pod \"054275fd-f5b9-4326-98a3-af2cc1d76c17\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.472142 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9p49\" (UniqueName: \"kubernetes.io/projected/054275fd-f5b9-4326-98a3-af2cc1d76c17-kube-api-access-f9p49\") pod \"054275fd-f5b9-4326-98a3-af2cc1d76c17\" (UID: 
\"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.472168 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-confd\") pod \"054275fd-f5b9-4326-98a3-af2cc1d76c17\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.472203 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-plugins\") pod \"054275fd-f5b9-4326-98a3-af2cc1d76c17\" (UID: \"054275fd-f5b9-4326-98a3-af2cc1d76c17\") " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.473164 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "054275fd-f5b9-4326-98a3-af2cc1d76c17" (UID: "054275fd-f5b9-4326-98a3-af2cc1d76c17"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.475013 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "054275fd-f5b9-4326-98a3-af2cc1d76c17" (UID: "054275fd-f5b9-4326-98a3-af2cc1d76c17"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.480058 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/054275fd-f5b9-4326-98a3-af2cc1d76c17-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "054275fd-f5b9-4326-98a3-af2cc1d76c17" (UID: "054275fd-f5b9-4326-98a3-af2cc1d76c17"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.495923 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/054275fd-f5b9-4326-98a3-af2cc1d76c17-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "054275fd-f5b9-4326-98a3-af2cc1d76c17" (UID: "054275fd-f5b9-4326-98a3-af2cc1d76c17"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.523945 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/054275fd-f5b9-4326-98a3-af2cc1d76c17-pod-info" (OuterVolumeSpecName: "pod-info") pod "054275fd-f5b9-4326-98a3-af2cc1d76c17" (UID: "054275fd-f5b9-4326-98a3-af2cc1d76c17"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.524200 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/054275fd-f5b9-4326-98a3-af2cc1d76c17-kube-api-access-f9p49" (OuterVolumeSpecName: "kube-api-access-f9p49") pod "054275fd-f5b9-4326-98a3-af2cc1d76c17" (UID: "054275fd-f5b9-4326-98a3-af2cc1d76c17"). InnerVolumeSpecName "kube-api-access-f9p49". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.524496 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "054275fd-f5b9-4326-98a3-af2cc1d76c17" (UID: "054275fd-f5b9-4326-98a3-af2cc1d76c17"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.524896 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "054275fd-f5b9-4326-98a3-af2cc1d76c17" (UID: "054275fd-f5b9-4326-98a3-af2cc1d76c17"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.580655 4765 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/054275fd-f5b9-4326-98a3-af2cc1d76c17-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.580700 4765 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.580740 4765 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.580753 4765 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/054275fd-f5b9-4326-98a3-af2cc1d76c17-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.580764 4765 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/054275fd-f5b9-4326-98a3-af2cc1d76c17-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.580781 4765 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.580792 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9p49\" (UniqueName: \"kubernetes.io/projected/054275fd-f5b9-4326-98a3-af2cc1d76c17-kube-api-access-f9p49\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.580802 4765 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.584388 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/054275fd-f5b9-4326-98a3-af2cc1d76c17-config-data" (OuterVolumeSpecName: "config-data") pod "054275fd-f5b9-4326-98a3-af2cc1d76c17" (UID: "054275fd-f5b9-4326-98a3-af2cc1d76c17"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.594452 4765 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.606903 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d783178-0ea7-4643-802f-d56722e1df7d-server-conf" (OuterVolumeSpecName: "server-conf") pod "4d783178-0ea7-4643-802f-d56722e1df7d" (UID: "4d783178-0ea7-4643-802f-d56722e1df7d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.640040 4765 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.655677 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4d783178-0ea7-4643-802f-d56722e1df7d" (UID: "4d783178-0ea7-4643-802f-d56722e1df7d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.667309 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/054275fd-f5b9-4326-98a3-af2cc1d76c17-server-conf" (OuterVolumeSpecName: "server-conf") pod "054275fd-f5b9-4326-98a3-af2cc1d76c17" (UID: "054275fd-f5b9-4326-98a3-af2cc1d76c17"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.685541 4765 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d783178-0ea7-4643-802f-d56722e1df7d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.685579 4765 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.685593 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/054275fd-f5b9-4326-98a3-af2cc1d76c17-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.685603 4765 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.685618 4765 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/054275fd-f5b9-4326-98a3-af2cc1d76c17-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.685627 4765 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d783178-0ea7-4643-802f-d56722e1df7d-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.739227 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "054275fd-f5b9-4326-98a3-af2cc1d76c17" (UID: "054275fd-f5b9-4326-98a3-af2cc1d76c17"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.787356 4765 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/054275fd-f5b9-4326-98a3-af2cc1d76c17-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.812903 4765 generic.go:334] "Generic (PLEG): container finished" podID="054275fd-f5b9-4326-98a3-af2cc1d76c17" containerID="86eb1244c7d3b1abc5524f76b3df354eda942ce6e12f45e000ae681bccd46da4" exitCode=0 Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.812989 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"054275fd-f5b9-4326-98a3-af2cc1d76c17","Type":"ContainerDied","Data":"86eb1244c7d3b1abc5524f76b3df354eda942ce6e12f45e000ae681bccd46da4"} Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.813020 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"054275fd-f5b9-4326-98a3-af2cc1d76c17","Type":"ContainerDied","Data":"cecfb471a9be2aa3d7d4eb41b9fa91997f657a8a351cf92c0a3084ded3964424"} Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.813038 4765 scope.go:117] "RemoveContainer" containerID="86eb1244c7d3b1abc5524f76b3df354eda942ce6e12f45e000ae681bccd46da4" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.813176 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.829577 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4d783178-0ea7-4643-802f-d56722e1df7d","Type":"ContainerDied","Data":"62b2916abb97801b71d2644df613491a0bc09cf00dd9659618717e94d7878084"} Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.829727 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.858243 4765 scope.go:117] "RemoveContainer" containerID="7dcc51364c36973f1ebc49e3c990ab016165b1bb8ac45a8169fac12e8e7360f4" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.886026 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.898011 4765 scope.go:117] "RemoveContainer" containerID="86eb1244c7d3b1abc5524f76b3df354eda942ce6e12f45e000ae681bccd46da4" Jan 21 13:27:24 crc kubenswrapper[4765]: E0121 13:27:24.898931 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86eb1244c7d3b1abc5524f76b3df354eda942ce6e12f45e000ae681bccd46da4\": container with ID starting with 86eb1244c7d3b1abc5524f76b3df354eda942ce6e12f45e000ae681bccd46da4 not found: ID does not exist" containerID="86eb1244c7d3b1abc5524f76b3df354eda942ce6e12f45e000ae681bccd46da4" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.898962 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86eb1244c7d3b1abc5524f76b3df354eda942ce6e12f45e000ae681bccd46da4"} err="failed to get container status \"86eb1244c7d3b1abc5524f76b3df354eda942ce6e12f45e000ae681bccd46da4\": rpc error: code = NotFound desc = could not find container \"86eb1244c7d3b1abc5524f76b3df354eda942ce6e12f45e000ae681bccd46da4\": container with ID starting with 86eb1244c7d3b1abc5524f76b3df354eda942ce6e12f45e000ae681bccd46da4 not found: ID does not exist" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.898984 4765 scope.go:117] "RemoveContainer" containerID="7dcc51364c36973f1ebc49e3c990ab016165b1bb8ac45a8169fac12e8e7360f4" Jan 21 13:27:24 crc kubenswrapper[4765]: E0121 13:27:24.905034 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dcc51364c36973f1ebc49e3c990ab016165b1bb8ac45a8169fac12e8e7360f4\": container with ID starting with 7dcc51364c36973f1ebc49e3c990ab016165b1bb8ac45a8169fac12e8e7360f4 not found: ID does not exist" containerID="7dcc51364c36973f1ebc49e3c990ab016165b1bb8ac45a8169fac12e8e7360f4" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.905095 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dcc51364c36973f1ebc49e3c990ab016165b1bb8ac45a8169fac12e8e7360f4"} err="failed to get container status \"7dcc51364c36973f1ebc49e3c990ab016165b1bb8ac45a8169fac12e8e7360f4\": rpc error: code = NotFound desc = could not find container \"7dcc51364c36973f1ebc49e3c990ab016165b1bb8ac45a8169fac12e8e7360f4\": container with ID starting with 7dcc51364c36973f1ebc49e3c990ab016165b1bb8ac45a8169fac12e8e7360f4 not found: ID does not exist" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.905132 4765 scope.go:117] "RemoveContainer" containerID="85748d994c8b907b866b52a387ecb62d3fb2d52f35909390b09cc0acf091d06e" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.914621 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.937540 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 13:27:24 crc kubenswrapper[4765]: E0121 13:27:24.937990 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="054275fd-f5b9-4326-98a3-af2cc1d76c17" containerName="rabbitmq" Jan 
21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.938005 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="054275fd-f5b9-4326-98a3-af2cc1d76c17" containerName="rabbitmq" Jan 21 13:27:24 crc kubenswrapper[4765]: E0121 13:27:24.938030 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d783178-0ea7-4643-802f-d56722e1df7d" containerName="setup-container" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.938038 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d783178-0ea7-4643-802f-d56722e1df7d" containerName="setup-container" Jan 21 13:27:24 crc kubenswrapper[4765]: E0121 13:27:24.938052 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4762a010-6fab-4ee6-bbb8-f5d6669b079a" containerName="extract-utilities" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.938059 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4762a010-6fab-4ee6-bbb8-f5d6669b079a" containerName="extract-utilities" Jan 21 13:27:24 crc kubenswrapper[4765]: E0121 13:27:24.938072 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d783178-0ea7-4643-802f-d56722e1df7d" containerName="rabbitmq" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.938079 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d783178-0ea7-4643-802f-d56722e1df7d" containerName="rabbitmq" Jan 21 13:27:24 crc kubenswrapper[4765]: E0121 13:27:24.938087 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon-log" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.938094 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon-log" Jan 21 13:27:24 crc kubenswrapper[4765]: E0121 13:27:24.938109 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.938117 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" Jan 21 13:27:24 crc kubenswrapper[4765]: E0121 13:27:24.938133 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4762a010-6fab-4ee6-bbb8-f5d6669b079a" containerName="registry-server" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.938139 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4762a010-6fab-4ee6-bbb8-f5d6669b079a" containerName="registry-server" Jan 21 13:27:24 crc kubenswrapper[4765]: E0121 13:27:24.938149 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4762a010-6fab-4ee6-bbb8-f5d6669b079a" containerName="extract-content" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.938156 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4762a010-6fab-4ee6-bbb8-f5d6669b079a" containerName="extract-content" Jan 21 13:27:24 crc kubenswrapper[4765]: E0121 13:27:24.938166 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="054275fd-f5b9-4326-98a3-af2cc1d76c17" containerName="setup-container" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.938173 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="054275fd-f5b9-4326-98a3-af2cc1d76c17" containerName="setup-container" Jan 21 13:27:24 crc kubenswrapper[4765]: E0121 13:27:24.938195 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" Jan 21 13:27:24 crc 
kubenswrapper[4765]: I0121 13:27:24.938204 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" Jan 21 13:27:24 crc kubenswrapper[4765]: E0121 13:27:24.938300 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.938309 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.938519 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.938535 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.938549 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="054275fd-f5b9-4326-98a3-af2cc1d76c17" containerName="rabbitmq" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.938561 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon-log" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.938569 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="074ae613-bc7f-4443-abdb-7010b6054997" containerName="horizon" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.938583 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="4762a010-6fab-4ee6-bbb8-f5d6669b079a" containerName="registry-server" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.938595 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d783178-0ea7-4643-802f-d56722e1df7d" containerName="rabbitmq" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.939673 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.950351 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.950554 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.950675 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-t7g28" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.950789 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.950916 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.951054 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.951165 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.984751 4765 scope.go:117] "RemoveContainer" containerID="4616ef97539fc8112f0373c108ede44e8bc6f6f97bc36b1ff01a83991a083f75" Jan 21 13:27:24 crc kubenswrapper[4765]: I0121 13:27:24.990065 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.030305 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.036540 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.038260 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.044670 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.045011 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.045924 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.046088 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.046309 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-nkftp" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.046724 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.049040 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.055554 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.067001 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.091885 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/997a77bd-3d32-4db3-a34d-588eb0ea88a3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.092057 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/997a77bd-3d32-4db3-a34d-588eb0ea88a3-config-data\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.092109 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm29q\" (UniqueName: \"kubernetes.io/projected/997a77bd-3d32-4db3-a34d-588eb0ea88a3-kube-api-access-wm29q\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.092144 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/997a77bd-3d32-4db3-a34d-588eb0ea88a3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.092181 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/997a77bd-3d32-4db3-a34d-588eb0ea88a3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc 
kubenswrapper[4765]: I0121 13:27:25.092298 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.092330 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/997a77bd-3d32-4db3-a34d-588eb0ea88a3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.092364 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/997a77bd-3d32-4db3-a34d-588eb0ea88a3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.092398 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/997a77bd-3d32-4db3-a34d-588eb0ea88a3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.092469 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/997a77bd-3d32-4db3-a34d-588eb0ea88a3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.092496 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/997a77bd-3d32-4db3-a34d-588eb0ea88a3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.194652 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqfmh\" (UniqueName: \"kubernetes.io/projected/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-kube-api-access-qqfmh\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.194727 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm29q\" (UniqueName: \"kubernetes.io/projected/997a77bd-3d32-4db3-a34d-588eb0ea88a3-kube-api-access-wm29q\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.194868 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/997a77bd-3d32-4db3-a34d-588eb0ea88a3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.194953 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/997a77bd-3d32-4db3-a34d-588eb0ea88a3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.194994 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.195019 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.195056 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/997a77bd-3d32-4db3-a34d-588eb0ea88a3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.195119 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.195156 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/997a77bd-3d32-4db3-a34d-588eb0ea88a3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.195200 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.195262 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/997a77bd-3d32-4db3-a34d-588eb0ea88a3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.195283 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.195335 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/997a77bd-3d32-4db3-a34d-588eb0ea88a3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.195366 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.195778 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/997a77bd-3d32-4db3-a34d-588eb0ea88a3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.195371 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/997a77bd-3d32-4db3-a34d-588eb0ea88a3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.195844 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/997a77bd-3d32-4db3-a34d-588eb0ea88a3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.195903 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.195932 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.196018 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.196068 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.196115 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.196149 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.196200 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/997a77bd-3d32-4db3-a34d-588eb0ea88a3-config-data\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.196311 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/997a77bd-3d32-4db3-a34d-588eb0ea88a3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.196916 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/997a77bd-3d32-4db3-a34d-588eb0ea88a3-config-data\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.197509 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/997a77bd-3d32-4db3-a34d-588eb0ea88a3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.200369 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/997a77bd-3d32-4db3-a34d-588eb0ea88a3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.206469 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/997a77bd-3d32-4db3-a34d-588eb0ea88a3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.209494 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/997a77bd-3d32-4db3-a34d-588eb0ea88a3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.215096 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/997a77bd-3d32-4db3-a34d-588eb0ea88a3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.221977 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm29q\" (UniqueName: 
\"kubernetes.io/projected/997a77bd-3d32-4db3-a34d-588eb0ea88a3-kube-api-access-wm29q\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.222552 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/997a77bd-3d32-4db3-a34d-588eb0ea88a3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.247688 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"997a77bd-3d32-4db3-a34d-588eb0ea88a3\") " pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.283775 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.298225 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.298462 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.298540 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.298661 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.299257 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.299437 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.300273 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.301070 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.301690 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.302296 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.302392 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.302537 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqfmh\" (UniqueName: \"kubernetes.io/projected/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-kube-api-access-qqfmh\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.302678 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.301618 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.300978 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.300167 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.304024 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.305915 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.308439 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.308825 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.310631 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.330364 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqfmh\" (UniqueName: \"kubernetes.io/projected/f302fd12-fe7e-455b-94f0-aafe7ddb95f2-kube-api-access-qqfmh\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.333735 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f302fd12-fe7e-455b-94f0-aafe7ddb95f2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.375531 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.633732 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="054275fd-f5b9-4326-98a3-af2cc1d76c17" path="/var/lib/kubelet/pods/054275fd-f5b9-4326-98a3-af2cc1d76c17/volumes" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.634851 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d783178-0ea7-4643-802f-d56722e1df7d" path="/var/lib/kubelet/pods/4d783178-0ea7-4643-802f-d56722e1df7d/volumes" Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.788054 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 13:27:25 crc kubenswrapper[4765]: W0121 13:27:25.806776 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod997a77bd_3d32_4db3_a34d_588eb0ea88a3.slice/crio-1d796a24bff8aa3cfaa5edae1550740c03074c19fdef8402ddee30e08febd1f1 WatchSource:0}: Error finding container 1d796a24bff8aa3cfaa5edae1550740c03074c19fdef8402ddee30e08febd1f1: Status 404 returned error can't find the container with id 1d796a24bff8aa3cfaa5edae1550740c03074c19fdef8402ddee30e08febd1f1 Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.846126 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"997a77bd-3d32-4db3-a34d-588eb0ea88a3","Type":"ContainerStarted","Data":"1d796a24bff8aa3cfaa5edae1550740c03074c19fdef8402ddee30e08febd1f1"} Jan 21 13:27:25 crc kubenswrapper[4765]: I0121 13:27:25.923502 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 13:27:25 crc kubenswrapper[4765]: W0121 13:27:25.935003 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf302fd12_fe7e_455b_94f0_aafe7ddb95f2.slice/crio-dd0d0a598bfee019e13f155659b2d040bf22b951e30ee15b981c0945858fd8d5 WatchSource:0}: Error finding container dd0d0a598bfee019e13f155659b2d040bf22b951e30ee15b981c0945858fd8d5: Status 404 returned error can't find the container with id dd0d0a598bfee019e13f155659b2d040bf22b951e30ee15b981c0945858fd8d5 Jan 21 13:27:26 crc kubenswrapper[4765]: I0121 13:27:26.857079 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f302fd12-fe7e-455b-94f0-aafe7ddb95f2","Type":"ContainerStarted","Data":"dd0d0a598bfee019e13f155659b2d040bf22b951e30ee15b981c0945858fd8d5"} Jan 21 13:27:27 crc kubenswrapper[4765]: I0121 13:27:27.865878 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"997a77bd-3d32-4db3-a34d-588eb0ea88a3","Type":"ContainerStarted","Data":"0f58a0a36bb81d8a06397bbd2e6963f2ab21b8477754fb92da636e262d961aeb"} Jan 21 13:27:27 crc kubenswrapper[4765]: I0121 13:27:27.867446 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f302fd12-fe7e-455b-94f0-aafe7ddb95f2","Type":"ContainerStarted","Data":"d9d09475d2c7ab086a3de8f4c5c6a4a9a3304818ab5feaff62106a00e5991cb6"} Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.041836 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-rcgzn"] Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.043778 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.045847 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.078332 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-rcgzn"] Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.183188 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.183337 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.183385 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.183593 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7jz6\" (UniqueName: \"kubernetes.io/projected/5551d5b9-a9a3-433c-ac02-51355ae7f086-kube-api-access-x7jz6\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.183683 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-config\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.183798 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-dns-svc\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.183913 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.287469 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: 
\"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.287534 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.287565 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.287627 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7jz6\" (UniqueName: \"kubernetes.io/projected/5551d5b9-a9a3-433c-ac02-51355ae7f086-kube-api-access-x7jz6\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.287660 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-config\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.287719 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-dns-svc\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.287790 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.288671 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.288683 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.288949 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-dns-svc\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 
crc kubenswrapper[4765]: I0121 13:27:29.289271 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.289688 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-config\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.292821 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.314556 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7jz6\" (UniqueName: \"kubernetes.io/projected/5551d5b9-a9a3-433c-ac02-51355ae7f086-kube-api-access-x7jz6\") pod \"dnsmasq-dns-d558885bc-rcgzn\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:29 crc kubenswrapper[4765]: I0121 13:27:29.363322 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:30 crc kubenswrapper[4765]: I0121 13:27:30.006735 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-rcgzn"] Jan 21 13:27:30 crc kubenswrapper[4765]: I0121 13:27:30.950602 4765 generic.go:334] "Generic (PLEG): container finished" podID="5551d5b9-a9a3-433c-ac02-51355ae7f086" containerID="42e286e89cb0be50915e9626a864c27af7977a5d904a0679c3b5822a98155a45" exitCode=0 Jan 21 13:27:30 crc kubenswrapper[4765]: I0121 13:27:30.951030 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-rcgzn" event={"ID":"5551d5b9-a9a3-433c-ac02-51355ae7f086","Type":"ContainerDied","Data":"42e286e89cb0be50915e9626a864c27af7977a5d904a0679c3b5822a98155a45"} Jan 21 13:27:30 crc kubenswrapper[4765]: I0121 13:27:30.951087 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-rcgzn" event={"ID":"5551d5b9-a9a3-433c-ac02-51355ae7f086","Type":"ContainerStarted","Data":"567b4333485f48789bce75e0121522124ec8e884106d2dc1febc3f7f95589d97"} Jan 21 13:27:31 crc kubenswrapper[4765]: I0121 13:27:31.964050 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-rcgzn" event={"ID":"5551d5b9-a9a3-433c-ac02-51355ae7f086","Type":"ContainerStarted","Data":"22f41c27ae290e5580cc687586dab7608b21a7ac718697dc869908e90898525c"} Jan 21 13:27:31 crc kubenswrapper[4765]: I0121 13:27:31.964409 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:31 crc kubenswrapper[4765]: I0121 13:27:31.993114 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-rcgzn" podStartSLOduration=2.993094925 podStartE2EDuration="2.993094925s" podCreationTimestamp="2026-01-21 13:27:29 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:27:31.986820332 +0000 UTC m=+1513.004546174" watchObservedRunningTime="2026-01-21 13:27:31.993094925 +0000 UTC m=+1513.010820747" Jan 21 13:27:35 crc kubenswrapper[4765]: I0121 13:27:35.382088 4765 scope.go:117] "RemoveContainer" containerID="c9adc1a2fee911ee8f9ffeb7d5635bb997f41fe2d4cb3f440c91fc1c69005823" Jan 21 13:27:35 crc kubenswrapper[4765]: I0121 13:27:35.411047 4765 scope.go:117] "RemoveContainer" containerID="21ca98e9119a0330c04f2542d8cd8b5a6cc10f1ebc18d0a1425a21a9f5212956" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.366641 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.439476 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-brgf9"] Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.439793 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" podUID="bedcf2ee-ca90-440d-bc45-7022079ed9e4" containerName="dnsmasq-dns" containerID="cri-o://5343b4efc7b08cd51a85c5f59689c9ed61d251cedf7d3d181d10ada6d64d098a" gracePeriod=10 Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.594614 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-sh9vb"] Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.596711 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.669969 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-sh9vb"] Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.708386 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b82d059-d861-40e4-8892-ba17220d1b78-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.708440 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b82d059-d861-40e4-8892-ba17220d1b78-config\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.708480 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b82d059-d861-40e4-8892-ba17220d1b78-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.708666 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c69zg\" (UniqueName: \"kubernetes.io/projected/8b82d059-d861-40e4-8892-ba17220d1b78-kube-api-access-c69zg\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.708697 4765 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b82d059-d861-40e4-8892-ba17220d1b78-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.708719 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b82d059-d861-40e4-8892-ba17220d1b78-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.708812 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8b82d059-d861-40e4-8892-ba17220d1b78-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.810697 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8b82d059-d861-40e4-8892-ba17220d1b78-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.810973 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b82d059-d861-40e4-8892-ba17220d1b78-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.810995 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b82d059-d861-40e4-8892-ba17220d1b78-config\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.811021 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b82d059-d861-40e4-8892-ba17220d1b78-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.811106 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c69zg\" (UniqueName: \"kubernetes.io/projected/8b82d059-d861-40e4-8892-ba17220d1b78-kube-api-access-c69zg\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.811126 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b82d059-d861-40e4-8892-ba17220d1b78-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.811146 4765 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b82d059-d861-40e4-8892-ba17220d1b78-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.811885 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8b82d059-d861-40e4-8892-ba17220d1b78-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.812039 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b82d059-d861-40e4-8892-ba17220d1b78-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.812729 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b82d059-d861-40e4-8892-ba17220d1b78-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.812826 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b82d059-d861-40e4-8892-ba17220d1b78-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.814348 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b82d059-d861-40e4-8892-ba17220d1b78-config\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.814487 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b82d059-d861-40e4-8892-ba17220d1b78-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:39 crc kubenswrapper[4765]: I0121 13:27:39.847092 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c69zg\" (UniqueName: \"kubernetes.io/projected/8b82d059-d861-40e4-8892-ba17220d1b78-kube-api-access-c69zg\") pod \"dnsmasq-dns-6b6dc74c5-sh9vb\" (UID: \"8b82d059-d861-40e4-8892-ba17220d1b78\") " pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:39.951835 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.087026 4765 generic.go:334] "Generic (PLEG): container finished" podID="bedcf2ee-ca90-440d-bc45-7022079ed9e4" containerID="5343b4efc7b08cd51a85c5f59689c9ed61d251cedf7d3d181d10ada6d64d098a" exitCode=0 Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.087162 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" event={"ID":"bedcf2ee-ca90-440d-bc45-7022079ed9e4","Type":"ContainerDied","Data":"5343b4efc7b08cd51a85c5f59689c9ed61d251cedf7d3d181d10ada6d64d098a"} Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.165712 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.220294 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-config\") pod \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.220341 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-dns-swift-storage-0\") pod \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.220402 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-ovsdbserver-sb\") pod \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.220576 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-ovsdbserver-nb\") pod \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.220638 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-dns-svc\") pod \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.220701 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5zrw\" (UniqueName: \"kubernetes.io/projected/bedcf2ee-ca90-440d-bc45-7022079ed9e4-kube-api-access-r5zrw\") pod \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\" (UID: \"bedcf2ee-ca90-440d-bc45-7022079ed9e4\") " Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.225046 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bedcf2ee-ca90-440d-bc45-7022079ed9e4-kube-api-access-r5zrw" (OuterVolumeSpecName: "kube-api-access-r5zrw") pod "bedcf2ee-ca90-440d-bc45-7022079ed9e4" (UID: "bedcf2ee-ca90-440d-bc45-7022079ed9e4"). InnerVolumeSpecName "kube-api-access-r5zrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.285141 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bedcf2ee-ca90-440d-bc45-7022079ed9e4" (UID: "bedcf2ee-ca90-440d-bc45-7022079ed9e4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.290368 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bedcf2ee-ca90-440d-bc45-7022079ed9e4" (UID: "bedcf2ee-ca90-440d-bc45-7022079ed9e4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.291450 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-config" (OuterVolumeSpecName: "config") pod "bedcf2ee-ca90-440d-bc45-7022079ed9e4" (UID: "bedcf2ee-ca90-440d-bc45-7022079ed9e4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.305838 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bedcf2ee-ca90-440d-bc45-7022079ed9e4" (UID: "bedcf2ee-ca90-440d-bc45-7022079ed9e4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.306132 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bedcf2ee-ca90-440d-bc45-7022079ed9e4" (UID: "bedcf2ee-ca90-440d-bc45-7022079ed9e4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.322874 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.322895 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.322905 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5zrw\" (UniqueName: \"kubernetes.io/projected/bedcf2ee-ca90-440d-bc45-7022079ed9e4-kube-api-access-r5zrw\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.322915 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.322923 4765 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:40 crc kubenswrapper[4765]: I0121 13:27:40.322931 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bedcf2ee-ca90-440d-bc45-7022079ed9e4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:41 crc kubenswrapper[4765]: I0121 13:27:41.097147 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" event={"ID":"bedcf2ee-ca90-440d-bc45-7022079ed9e4","Type":"ContainerDied","Data":"ad4335dc772311adc88aa36375437fffa7c008aee8ecee5af1f1e12713d041f7"} Jan 21 13:27:41 crc kubenswrapper[4765]: I0121 13:27:41.097187 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-brgf9" Jan 21 13:27:41 crc kubenswrapper[4765]: I0121 13:27:41.097481 4765 scope.go:117] "RemoveContainer" containerID="5343b4efc7b08cd51a85c5f59689c9ed61d251cedf7d3d181d10ada6d64d098a" Jan 21 13:27:41 crc kubenswrapper[4765]: I0121 13:27:41.127285 4765 scope.go:117] "RemoveContainer" containerID="0e474ad7e1494a3e602a146ea1501408fd141d6b61d6f993d2e3a325c31ee690" Jan 21 13:27:41 crc kubenswrapper[4765]: I0121 13:27:41.204400 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-brgf9"] Jan 21 13:27:41 crc kubenswrapper[4765]: I0121 13:27:41.212818 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-brgf9"] Jan 21 13:27:41 crc kubenswrapper[4765]: I0121 13:27:41.265318 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-sh9vb"] Jan 21 13:27:41 crc kubenswrapper[4765]: I0121 13:27:41.640384 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bedcf2ee-ca90-440d-bc45-7022079ed9e4" path="/var/lib/kubelet/pods/bedcf2ee-ca90-440d-bc45-7022079ed9e4/volumes" Jan 21 13:27:42 crc kubenswrapper[4765]: I0121 13:27:42.106983 4765 generic.go:334] "Generic (PLEG): container finished" podID="8b82d059-d861-40e4-8892-ba17220d1b78" containerID="15b680bb9bac51813cfdb104f4059f5ea0d131a3f213f2b3f15f6751a9644371" exitCode=0 Jan 21 13:27:42 crc kubenswrapper[4765]: I0121 13:27:42.107067 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" event={"ID":"8b82d059-d861-40e4-8892-ba17220d1b78","Type":"ContainerDied","Data":"15b680bb9bac51813cfdb104f4059f5ea0d131a3f213f2b3f15f6751a9644371"} Jan 21 13:27:42 crc kubenswrapper[4765]: I0121 13:27:42.107097 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" event={"ID":"8b82d059-d861-40e4-8892-ba17220d1b78","Type":"ContainerStarted","Data":"05451412eef204326a41e8e7b84e5d06956e3f2e446e82cb32fb05127fef162a"} Jan 21 13:27:43 crc kubenswrapper[4765]: I0121 13:27:43.119861 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" event={"ID":"8b82d059-d861-40e4-8892-ba17220d1b78","Type":"ContainerStarted","Data":"5a110bcfc9dff2fa623a3f587b48b37e6da79665cc0af4305afa276fd1b060e2"} Jan 21 13:27:43 crc kubenswrapper[4765]: I0121 13:27:43.120368 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:49 crc kubenswrapper[4765]: I0121 13:27:49.954367 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" Jan 21 13:27:49 crc kubenswrapper[4765]: I0121 13:27:49.983973 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b6dc74c5-sh9vb" podStartSLOduration=10.983762367 podStartE2EDuration="10.983762367s" podCreationTimestamp="2026-01-21 13:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:27:43.146851901 +0000 UTC m=+1524.164577723" watchObservedRunningTime="2026-01-21 13:27:49.983762367 +0000 UTC m=+1531.001488189" Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.036930 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-rcgzn"] Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.037431 4765 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-rcgzn" podUID="5551d5b9-a9a3-433c-ac02-51355ae7f086" containerName="dnsmasq-dns" containerID="cri-o://22f41c27ae290e5580cc687586dab7608b21a7ac718697dc869908e90898525c" gracePeriod=10 Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.201815 4765 generic.go:334] "Generic (PLEG): container finished" podID="5551d5b9-a9a3-433c-ac02-51355ae7f086" containerID="22f41c27ae290e5580cc687586dab7608b21a7ac718697dc869908e90898525c" exitCode=0 Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.202508 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-rcgzn" event={"ID":"5551d5b9-a9a3-433c-ac02-51355ae7f086","Type":"ContainerDied","Data":"22f41c27ae290e5580cc687586dab7608b21a7ac718697dc869908e90898525c"} Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.597227 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.621622 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-dns-svc\") pod \"5551d5b9-a9a3-433c-ac02-51355ae7f086\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.621666 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-dns-swift-storage-0\") pod \"5551d5b9-a9a3-433c-ac02-51355ae7f086\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.621762 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7jz6\" (UniqueName: \"kubernetes.io/projected/5551d5b9-a9a3-433c-ac02-51355ae7f086-kube-api-access-x7jz6\") pod \"5551d5b9-a9a3-433c-ac02-51355ae7f086\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.621811 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-openstack-edpm-ipam\") pod \"5551d5b9-a9a3-433c-ac02-51355ae7f086\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.621859 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-ovsdbserver-sb\") pod \"5551d5b9-a9a3-433c-ac02-51355ae7f086\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.621891 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-config\") pod \"5551d5b9-a9a3-433c-ac02-51355ae7f086\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.621917 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-ovsdbserver-nb\") pod \"5551d5b9-a9a3-433c-ac02-51355ae7f086\" (UID: \"5551d5b9-a9a3-433c-ac02-51355ae7f086\") " Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 
Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.679232 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "5551d5b9-a9a3-433c-ac02-51355ae7f086" (UID: "5551d5b9-a9a3-433c-ac02-51355ae7f086"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.721259 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5551d5b9-a9a3-433c-ac02-51355ae7f086" (UID: "5551d5b9-a9a3-433c-ac02-51355ae7f086"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.724231 4765 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.724268 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7jz6\" (UniqueName: \"kubernetes.io/projected/5551d5b9-a9a3-433c-ac02-51355ae7f086-kube-api-access-x7jz6\") on node \"crc\" DevicePath \"\""
Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.724452 4765 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.735092 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5551d5b9-a9a3-433c-ac02-51355ae7f086" (UID: "5551d5b9-a9a3-433c-ac02-51355ae7f086"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.748529 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5551d5b9-a9a3-433c-ac02-51355ae7f086" (UID: "5551d5b9-a9a3-433c-ac02-51355ae7f086"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.760360 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5551d5b9-a9a3-433c-ac02-51355ae7f086" (UID: "5551d5b9-a9a3-433c-ac02-51355ae7f086"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.766371 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-config" (OuterVolumeSpecName: "config") pod "5551d5b9-a9a3-433c-ac02-51355ae7f086" (UID: "5551d5b9-a9a3-433c-ac02-51355ae7f086"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.826174 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.826225 4765 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-config\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.826237 4765 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:50 crc kubenswrapper[4765]: I0121 13:27:50.826246 4765 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5551d5b9-a9a3-433c-ac02-51355ae7f086-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:51 crc kubenswrapper[4765]: I0121 13:27:51.214545 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-rcgzn" event={"ID":"5551d5b9-a9a3-433c-ac02-51355ae7f086","Type":"ContainerDied","Data":"567b4333485f48789bce75e0121522124ec8e884106d2dc1febc3f7f95589d97"} Jan 21 13:27:51 crc kubenswrapper[4765]: I0121 13:27:51.214597 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-rcgzn" Jan 21 13:27:51 crc kubenswrapper[4765]: I0121 13:27:51.216000 4765 scope.go:117] "RemoveContainer" containerID="22f41c27ae290e5580cc687586dab7608b21a7ac718697dc869908e90898525c" Jan 21 13:27:51 crc kubenswrapper[4765]: I0121 13:27:51.254840 4765 scope.go:117] "RemoveContainer" containerID="42e286e89cb0be50915e9626a864c27af7977a5d904a0679c3b5822a98155a45" Jan 21 13:27:51 crc kubenswrapper[4765]: I0121 13:27:51.261544 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-rcgzn"] Jan 21 13:27:51 crc kubenswrapper[4765]: I0121 13:27:51.274024 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-rcgzn"] Jan 21 13:27:51 crc kubenswrapper[4765]: I0121 13:27:51.625664 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5551d5b9-a9a3-433c-ac02-51355ae7f086" path="/var/lib/kubelet/pods/5551d5b9-a9a3-433c-ac02-51355ae7f086/volumes" Jan 21 13:28:01 crc kubenswrapper[4765]: I0121 13:28:01.321629 4765 generic.go:334] "Generic (PLEG): container finished" podID="997a77bd-3d32-4db3-a34d-588eb0ea88a3" containerID="0f58a0a36bb81d8a06397bbd2e6963f2ab21b8477754fb92da636e262d961aeb" exitCode=0 Jan 21 13:28:01 crc kubenswrapper[4765]: I0121 13:28:01.321717 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"997a77bd-3d32-4db3-a34d-588eb0ea88a3","Type":"ContainerDied","Data":"0f58a0a36bb81d8a06397bbd2e6963f2ab21b8477754fb92da636e262d961aeb"} Jan 21 13:28:01 crc kubenswrapper[4765]: I0121 13:28:01.326779 4765 generic.go:334] "Generic (PLEG): container finished" podID="f302fd12-fe7e-455b-94f0-aafe7ddb95f2" containerID="d9d09475d2c7ab086a3de8f4c5c6a4a9a3304818ab5feaff62106a00e5991cb6" exitCode=0 Jan 21 13:28:01 crc kubenswrapper[4765]: I0121 13:28:01.326832 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f302fd12-fe7e-455b-94f0-aafe7ddb95f2","Type":"ContainerDied","Data":"d9d09475d2c7ab086a3de8f4c5c6a4a9a3304818ab5feaff62106a00e5991cb6"} Jan 21 13:28:02 crc kubenswrapper[4765]: I0121 13:28:02.336953 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f302fd12-fe7e-455b-94f0-aafe7ddb95f2","Type":"ContainerStarted","Data":"d861081e66d0c3e3ce095809d2fa179b9e331125e1343df327ec39410be30a9c"} Jan 21 13:28:02 crc kubenswrapper[4765]: I0121 13:28:02.337479 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:28:02 crc kubenswrapper[4765]: I0121 13:28:02.339088 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"997a77bd-3d32-4db3-a34d-588eb0ea88a3","Type":"ContainerStarted","Data":"7ebff1888bf068c5f2708f4dd2642d59e7ed28ba57715f8790096f2dbf1ae0a9"} Jan 21 13:28:02 crc kubenswrapper[4765]: I0121 13:28:02.339264 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 13:28:02 crc kubenswrapper[4765]: I0121 13:28:02.361614 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.361594689 podStartE2EDuration="38.361594689s" podCreationTimestamp="2026-01-21 13:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:28:02.358430896 +0000 UTC 
m=+1543.376156738" watchObservedRunningTime="2026-01-21 13:28:02.361594689 +0000 UTC m=+1543.379320511" Jan 21 13:28:02 crc kubenswrapper[4765]: I0121 13:28:02.402951 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.402932047 podStartE2EDuration="38.402932047s" podCreationTimestamp="2026-01-21 13:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:28:02.398807176 +0000 UTC m=+1543.416532998" watchObservedRunningTime="2026-01-21 13:28:02.402932047 +0000 UTC m=+1543.420657879" Jan 21 13:28:08 crc kubenswrapper[4765]: I0121 13:28:08.877701 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp"] Jan 21 13:28:08 crc kubenswrapper[4765]: E0121 13:28:08.879872 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bedcf2ee-ca90-440d-bc45-7022079ed9e4" containerName="init" Jan 21 13:28:08 crc kubenswrapper[4765]: I0121 13:28:08.879967 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedcf2ee-ca90-440d-bc45-7022079ed9e4" containerName="init" Jan 21 13:28:08 crc kubenswrapper[4765]: E0121 13:28:08.880049 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bedcf2ee-ca90-440d-bc45-7022079ed9e4" containerName="dnsmasq-dns" Jan 21 13:28:08 crc kubenswrapper[4765]: I0121 13:28:08.880114 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedcf2ee-ca90-440d-bc45-7022079ed9e4" containerName="dnsmasq-dns" Jan 21 13:28:08 crc kubenswrapper[4765]: E0121 13:28:08.880192 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5551d5b9-a9a3-433c-ac02-51355ae7f086" containerName="dnsmasq-dns" Jan 21 13:28:08 crc kubenswrapper[4765]: I0121 13:28:08.880285 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="5551d5b9-a9a3-433c-ac02-51355ae7f086" containerName="dnsmasq-dns" Jan 21 13:28:08 crc kubenswrapper[4765]: E0121 13:28:08.880361 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5551d5b9-a9a3-433c-ac02-51355ae7f086" containerName="init" Jan 21 13:28:08 crc kubenswrapper[4765]: I0121 13:28:08.880425 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="5551d5b9-a9a3-433c-ac02-51355ae7f086" containerName="init" Jan 21 13:28:08 crc kubenswrapper[4765]: I0121 13:28:08.880716 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="bedcf2ee-ca90-440d-bc45-7022079ed9e4" containerName="dnsmasq-dns" Jan 21 13:28:08 crc kubenswrapper[4765]: I0121 13:28:08.880815 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="5551d5b9-a9a3-433c-ac02-51355ae7f086" containerName="dnsmasq-dns" Jan 21 13:28:08 crc kubenswrapper[4765]: I0121 13:28:08.881703 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" Jan 21 13:28:08 crc kubenswrapper[4765]: I0121 13:28:08.889721 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:28:08 crc kubenswrapper[4765]: I0121 13:28:08.889938 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:28:08 crc kubenswrapper[4765]: I0121 13:28:08.890112 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x88cm" Jan 21 13:28:08 crc kubenswrapper[4765]: I0121 13:28:08.904924 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:28:08 crc kubenswrapper[4765]: I0121 13:28:08.906884 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp"] Jan 21 13:28:08 crc kubenswrapper[4765]: I0121 13:28:08.925239 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp\" (UID: \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" Jan 21 13:28:08 crc kubenswrapper[4765]: I0121 13:28:08.925306 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp\" (UID: \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" Jan 21 13:28:08 crc kubenswrapper[4765]: I0121 13:28:08.925387 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp\" (UID: \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" Jan 21 13:28:08 crc kubenswrapper[4765]: I0121 13:28:08.925424 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zrdp\" (UniqueName: \"kubernetes.io/projected/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-kube-api-access-6zrdp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp\" (UID: \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" Jan 21 13:28:09 crc kubenswrapper[4765]: I0121 13:28:09.027204 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp\" (UID: \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" Jan 21 13:28:09 crc kubenswrapper[4765]: I0121 13:28:09.027289 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp\" (UID: \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" Jan 21 13:28:09 crc kubenswrapper[4765]: I0121 13:28:09.027368 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp\" (UID: \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" Jan 21 13:28:09 crc kubenswrapper[4765]: I0121 13:28:09.027403 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zrdp\" (UniqueName: \"kubernetes.io/projected/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-kube-api-access-6zrdp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp\" (UID: \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" Jan 21 13:28:09 crc kubenswrapper[4765]: I0121 13:28:09.034124 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp\" (UID: \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" Jan 21 13:28:09 crc kubenswrapper[4765]: I0121 13:28:09.040868 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp\" (UID: \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" Jan 21 13:28:09 crc kubenswrapper[4765]: I0121 13:28:09.042195 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp\" (UID: \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" Jan 21 13:28:09 crc kubenswrapper[4765]: I0121 13:28:09.045423 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zrdp\" (UniqueName: \"kubernetes.io/projected/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-kube-api-access-6zrdp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp\" (UID: \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" Jan 21 13:28:09 crc kubenswrapper[4765]: I0121 13:28:09.220110 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" Jan 21 13:28:09 crc kubenswrapper[4765]: I0121 13:28:09.957810 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp"] Jan 21 13:28:09 crc kubenswrapper[4765]: W0121 13:28:09.977088 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0dd97fb4_c9c8_4ea0_b0c9_69de90bfde12.slice/crio-53b65d40b8e24d61c60f63520ba5859772f2f21481ff76f5854379398d46d1b0 WatchSource:0}: Error finding container 53b65d40b8e24d61c60f63520ba5859772f2f21481ff76f5854379398d46d1b0: Status 404 returned error can't find the container with id 53b65d40b8e24d61c60f63520ba5859772f2f21481ff76f5854379398d46d1b0 Jan 21 13:28:10 crc kubenswrapper[4765]: I0121 13:28:10.417972 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" event={"ID":"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12","Type":"ContainerStarted","Data":"53b65d40b8e24d61c60f63520ba5859772f2f21481ff76f5854379398d46d1b0"} Jan 21 13:28:15 crc kubenswrapper[4765]: I0121 13:28:15.288546 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 13:28:15 crc kubenswrapper[4765]: I0121 13:28:15.382470 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 13:28:25 crc kubenswrapper[4765]: E0121 13:28:25.488332 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Jan 21 13:28:25 crc kubenswrapper[4765]: E0121 13:28:25.490426 4765 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 13:28:25 crc kubenswrapper[4765]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Jan 21 13:28:25 crc kubenswrapper[4765]: - hosts: all Jan 21 13:28:25 crc kubenswrapper[4765]: strategy: linear Jan 21 13:28:25 crc kubenswrapper[4765]: tasks: Jan 21 13:28:25 crc kubenswrapper[4765]: - name: Enable podified-repos Jan 21 13:28:25 crc kubenswrapper[4765]: become: true Jan 21 13:28:25 crc kubenswrapper[4765]: ansible.builtin.shell: | Jan 21 13:28:25 crc kubenswrapper[4765]: set -euxo pipefail Jan 21 13:28:25 crc kubenswrapper[4765]: pushd /var/tmp Jan 21 13:28:25 crc kubenswrapper[4765]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Jan 21 13:28:25 crc kubenswrapper[4765]: pushd repo-setup-main Jan 21 13:28:25 crc kubenswrapper[4765]: python3 -m venv ./venv Jan 21 13:28:25 crc kubenswrapper[4765]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Jan 21 13:28:25 crc kubenswrapper[4765]: ./venv/bin/repo-setup current-podified -b antelope Jan 21 13:28:25 crc kubenswrapper[4765]: popd Jan 21 13:28:25 crc kubenswrapper[4765]: rm -rf repo-setup-main Jan 21 13:28:25 crc kubenswrapper[4765]: Jan 21 13:28:25 crc kubenswrapper[4765]: Jan 21 13:28:25 crc kubenswrapper[4765]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Jan 21 13:28:25 crc 
Jan 21 13:28:25 crc kubenswrapper[4765]: edpm_service_type: repo-setup
Jan 21 13:28:25 crc kubenswrapper[4765]:
Jan 21 13:28:25 crc kubenswrapper[4765]:
Jan 21 13:28:25 crc kubenswrapper[4765]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zrdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp_openstack(0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled
Jan 21 13:28:25 crc kubenswrapper[4765]: > logger="UnhandledError"
Jan 21 13:28:25 crc kubenswrapper[4765]: E0121 13:28:25.492474 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" podUID="0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12"
Jan 21 13:28:25 crc kubenswrapper[4765]: E0121 13:28:25.592600 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" podUID="0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12"
Jan 21 13:28:35 crc kubenswrapper[4765]: I0121 13:28:35.595880 4765 scope.go:117] "RemoveContainer" containerID="dc35c6079cde6ca926273ed0c9597921be6ae633b635627d413e8d212906e8d1"
Jan 21 13:28:35 crc kubenswrapper[4765]: I0121 13:28:35.629394 4765 scope.go:117] "RemoveContainer" containerID="4d6767471925da961e268db4e379427ead8911869ddb04bf8fbf5ba5b3a25524"
Jan 21 13:28:35 crc kubenswrapper[4765]: I0121 13:28:35.676757 4765 scope.go:117] "RemoveContainer" containerID="c342cbc167565b0b099a201b8cb67b39137ac2bc568d29c9336e560cfdf9616d"
containerID="c342cbc167565b0b099a201b8cb67b39137ac2bc568d29c9336e560cfdf9616d" Jan 21 13:28:39 crc kubenswrapper[4765]: I0121 13:28:39.882511 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:28:40 crc kubenswrapper[4765]: I0121 13:28:40.951064 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" event={"ID":"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12","Type":"ContainerStarted","Data":"4fdc9657d3211d7048d55d36b9baef1fd3e5cd25e72eba1d98aaad7e718ed07e"} Jan 21 13:28:40 crc kubenswrapper[4765]: I0121 13:28:40.989856 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" podStartSLOduration=3.092853444 podStartE2EDuration="32.989830213s" podCreationTimestamp="2026-01-21 13:28:08 +0000 UTC" firstStartedPulling="2026-01-21 13:28:09.98214894 +0000 UTC m=+1550.999874762" lastFinishedPulling="2026-01-21 13:28:39.879125709 +0000 UTC m=+1580.896851531" observedRunningTime="2026-01-21 13:28:40.978472711 +0000 UTC m=+1581.996198533" watchObservedRunningTime="2026-01-21 13:28:40.989830213 +0000 UTC m=+1582.007556035" Jan 21 13:28:44 crc kubenswrapper[4765]: I0121 13:28:44.445938 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:28:44 crc kubenswrapper[4765]: I0121 13:28:44.446595 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:28:54 crc kubenswrapper[4765]: I0121 13:28:54.073873 4765 generic.go:334] "Generic (PLEG): container finished" podID="0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12" containerID="4fdc9657d3211d7048d55d36b9baef1fd3e5cd25e72eba1d98aaad7e718ed07e" exitCode=0 Jan 21 13:28:54 crc kubenswrapper[4765]: I0121 13:28:54.074499 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" event={"ID":"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12","Type":"ContainerDied","Data":"4fdc9657d3211d7048d55d36b9baef1fd3e5cd25e72eba1d98aaad7e718ed07e"} Jan 21 13:28:55 crc kubenswrapper[4765]: I0121 13:28:55.616203 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" Jan 21 13:28:55 crc kubenswrapper[4765]: I0121 13:28:55.787792 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zrdp\" (UniqueName: \"kubernetes.io/projected/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-kube-api-access-6zrdp\") pod \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\" (UID: \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\") " Jan 21 13:28:55 crc kubenswrapper[4765]: I0121 13:28:55.788592 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-repo-setup-combined-ca-bundle\") pod \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\" (UID: \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\") " Jan 21 13:28:55 crc kubenswrapper[4765]: I0121 13:28:55.788699 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-ssh-key-openstack-edpm-ipam\") pod \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\" (UID: \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\") " Jan 21 13:28:55 crc kubenswrapper[4765]: I0121 13:28:55.788783 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-inventory\") pod \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\" (UID: \"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12\") " Jan 21 13:28:55 crc kubenswrapper[4765]: I0121 13:28:55.795462 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12" (UID: "0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:28:55 crc kubenswrapper[4765]: I0121 13:28:55.814751 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-kube-api-access-6zrdp" (OuterVolumeSpecName: "kube-api-access-6zrdp") pod "0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12" (UID: "0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12"). InnerVolumeSpecName "kube-api-access-6zrdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:28:55 crc kubenswrapper[4765]: I0121 13:28:55.831788 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-inventory" (OuterVolumeSpecName: "inventory") pod "0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12" (UID: "0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:28:55 crc kubenswrapper[4765]: I0121 13:28:55.834017 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12" (UID: "0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:28:55 crc kubenswrapper[4765]: I0121 13:28:55.890849 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:55 crc kubenswrapper[4765]: I0121 13:28:55.890882 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zrdp\" (UniqueName: \"kubernetes.io/projected/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-kube-api-access-6zrdp\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:55 crc kubenswrapper[4765]: I0121 13:28:55.890893 4765 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:55 crc kubenswrapper[4765]: I0121 13:28:55.890902 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.093660 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" event={"ID":"0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12","Type":"ContainerDied","Data":"53b65d40b8e24d61c60f63520ba5859772f2f21481ff76f5854379398d46d1b0"} Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.094019 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53b65d40b8e24d61c60f63520ba5859772f2f21481ff76f5854379398d46d1b0" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.093723 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.194037 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn"] Jan 21 13:28:56 crc kubenswrapper[4765]: E0121 13:28:56.194604 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.194629 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.194869 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.195625 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.199198 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x88cm" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.199900 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.204948 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.207880 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn"] Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.211121 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.298597 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbl2x\" (UniqueName: \"kubernetes.io/projected/2f4e0a44-0962-4477-9526-4df004dd3625-kube-api-access-kbl2x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7wzhn\" (UID: \"2f4e0a44-0962-4477-9526-4df004dd3625\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.298803 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f4e0a44-0962-4477-9526-4df004dd3625-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7wzhn\" (UID: \"2f4e0a44-0962-4477-9526-4df004dd3625\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.298892 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f4e0a44-0962-4477-9526-4df004dd3625-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7wzhn\" (UID: \"2f4e0a44-0962-4477-9526-4df004dd3625\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.400815 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbl2x\" (UniqueName: \"kubernetes.io/projected/2f4e0a44-0962-4477-9526-4df004dd3625-kube-api-access-kbl2x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7wzhn\" (UID: \"2f4e0a44-0962-4477-9526-4df004dd3625\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.400868 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f4e0a44-0962-4477-9526-4df004dd3625-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7wzhn\" (UID: \"2f4e0a44-0962-4477-9526-4df004dd3625\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.400909 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f4e0a44-0962-4477-9526-4df004dd3625-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-7wzhn\" (UID: \"2f4e0a44-0962-4477-9526-4df004dd3625\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.406565 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f4e0a44-0962-4477-9526-4df004dd3625-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7wzhn\" (UID: \"2f4e0a44-0962-4477-9526-4df004dd3625\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.406956 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f4e0a44-0962-4477-9526-4df004dd3625-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7wzhn\" (UID: \"2f4e0a44-0962-4477-9526-4df004dd3625\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.423277 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbl2x\" (UniqueName: \"kubernetes.io/projected/2f4e0a44-0962-4477-9526-4df004dd3625-kube-api-access-kbl2x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7wzhn\" (UID: \"2f4e0a44-0962-4477-9526-4df004dd3625\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn" Jan 21 13:28:56 crc kubenswrapper[4765]: I0121 13:28:56.515472 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn" Jan 21 13:28:57 crc kubenswrapper[4765]: I0121 13:28:57.087986 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn"] Jan 21 13:28:57 crc kubenswrapper[4765]: W0121 13:28:57.093417 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f4e0a44_0962_4477_9526_4df004dd3625.slice/crio-6475a832d76ce74314cb9c94fe88f7738452200867e6b7bf516d380ce8264326 WatchSource:0}: Error finding container 6475a832d76ce74314cb9c94fe88f7738452200867e6b7bf516d380ce8264326: Status 404 returned error can't find the container with id 6475a832d76ce74314cb9c94fe88f7738452200867e6b7bf516d380ce8264326 Jan 21 13:28:57 crc kubenswrapper[4765]: I0121 13:28:57.117449 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn" event={"ID":"2f4e0a44-0962-4477-9526-4df004dd3625","Type":"ContainerStarted","Data":"6475a832d76ce74314cb9c94fe88f7738452200867e6b7bf516d380ce8264326"} Jan 21 13:29:00 crc kubenswrapper[4765]: I0121 13:29:00.145092 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn" event={"ID":"2f4e0a44-0962-4477-9526-4df004dd3625","Type":"ContainerStarted","Data":"7314f542674fa0b19cd2290872db33a60ab32a0a54557afc744382e13d63b67c"} Jan 21 13:29:01 crc kubenswrapper[4765]: I0121 13:29:01.195443 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn" podStartSLOduration=3.093830238 podStartE2EDuration="5.195417055s" podCreationTimestamp="2026-01-21 13:28:56 +0000 UTC" firstStartedPulling="2026-01-21 13:28:57.096810636 +0000 UTC m=+1598.114536468" lastFinishedPulling="2026-01-21 13:28:59.198397463 +0000 UTC m=+1600.216123285" 
observedRunningTime="2026-01-21 13:29:01.179472079 +0000 UTC m=+1602.197197901" watchObservedRunningTime="2026-01-21 13:29:01.195417055 +0000 UTC m=+1602.213142877" Jan 21 13:29:05 crc kubenswrapper[4765]: I0121 13:29:05.198758 4765 generic.go:334] "Generic (PLEG): container finished" podID="2f4e0a44-0962-4477-9526-4df004dd3625" containerID="7314f542674fa0b19cd2290872db33a60ab32a0a54557afc744382e13d63b67c" exitCode=0 Jan 21 13:29:05 crc kubenswrapper[4765]: I0121 13:29:05.198890 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn" event={"ID":"2f4e0a44-0962-4477-9526-4df004dd3625","Type":"ContainerDied","Data":"7314f542674fa0b19cd2290872db33a60ab32a0a54557afc744382e13d63b67c"} Jan 21 13:29:06 crc kubenswrapper[4765]: I0121 13:29:06.684744 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn" Jan 21 13:29:06 crc kubenswrapper[4765]: I0121 13:29:06.776751 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f4e0a44-0962-4477-9526-4df004dd3625-inventory\") pod \"2f4e0a44-0962-4477-9526-4df004dd3625\" (UID: \"2f4e0a44-0962-4477-9526-4df004dd3625\") " Jan 21 13:29:06 crc kubenswrapper[4765]: I0121 13:29:06.776844 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f4e0a44-0962-4477-9526-4df004dd3625-ssh-key-openstack-edpm-ipam\") pod \"2f4e0a44-0962-4477-9526-4df004dd3625\" (UID: \"2f4e0a44-0962-4477-9526-4df004dd3625\") " Jan 21 13:29:06 crc kubenswrapper[4765]: I0121 13:29:06.776947 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbl2x\" (UniqueName: \"kubernetes.io/projected/2f4e0a44-0962-4477-9526-4df004dd3625-kube-api-access-kbl2x\") pod \"2f4e0a44-0962-4477-9526-4df004dd3625\" (UID: \"2f4e0a44-0962-4477-9526-4df004dd3625\") " Jan 21 13:29:06 crc kubenswrapper[4765]: I0121 13:29:06.783017 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f4e0a44-0962-4477-9526-4df004dd3625-kube-api-access-kbl2x" (OuterVolumeSpecName: "kube-api-access-kbl2x") pod "2f4e0a44-0962-4477-9526-4df004dd3625" (UID: "2f4e0a44-0962-4477-9526-4df004dd3625"). InnerVolumeSpecName "kube-api-access-kbl2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:29:06 crc kubenswrapper[4765]: I0121 13:29:06.805172 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f4e0a44-0962-4477-9526-4df004dd3625-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2f4e0a44-0962-4477-9526-4df004dd3625" (UID: "2f4e0a44-0962-4477-9526-4df004dd3625"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:29:06 crc kubenswrapper[4765]: I0121 13:29:06.818014 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f4e0a44-0962-4477-9526-4df004dd3625-inventory" (OuterVolumeSpecName: "inventory") pod "2f4e0a44-0962-4477-9526-4df004dd3625" (UID: "2f4e0a44-0962-4477-9526-4df004dd3625"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:29:06 crc kubenswrapper[4765]: I0121 13:29:06.879619 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f4e0a44-0962-4477-9526-4df004dd3625-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:06 crc kubenswrapper[4765]: I0121 13:29:06.879659 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f4e0a44-0962-4477-9526-4df004dd3625-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:06 crc kubenswrapper[4765]: I0121 13:29:06.879675 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbl2x\" (UniqueName: \"kubernetes.io/projected/2f4e0a44-0962-4477-9526-4df004dd3625-kube-api-access-kbl2x\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.219599 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn" event={"ID":"2f4e0a44-0962-4477-9526-4df004dd3625","Type":"ContainerDied","Data":"6475a832d76ce74314cb9c94fe88f7738452200867e6b7bf516d380ce8264326"} Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.219647 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6475a832d76ce74314cb9c94fe88f7738452200867e6b7bf516d380ce8264326" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.219733 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7wzhn" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.313320 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2"] Jan 21 13:29:07 crc kubenswrapper[4765]: E0121 13:29:07.316282 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f4e0a44-0962-4477-9526-4df004dd3625" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.316314 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f4e0a44-0962-4477-9526-4df004dd3625" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.316792 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f4e0a44-0962-4477-9526-4df004dd3625" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.319905 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.322738 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x88cm" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.323203 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.323356 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.327876 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.335457 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2"] Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.388758 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/244e5c68-a93a-44e7-a8fd-d4368ee754bd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2\" (UID: \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.389110 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spxbn\" (UniqueName: \"kubernetes.io/projected/244e5c68-a93a-44e7-a8fd-d4368ee754bd-kube-api-access-spxbn\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2\" (UID: \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.389158 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/244e5c68-a93a-44e7-a8fd-d4368ee754bd-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2\" (UID: \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.389357 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/244e5c68-a93a-44e7-a8fd-d4368ee754bd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2\" (UID: \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.490974 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/244e5c68-a93a-44e7-a8fd-d4368ee754bd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2\" (UID: \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.491125 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/244e5c68-a93a-44e7-a8fd-d4368ee754bd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2\" (UID: \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.491249 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spxbn\" (UniqueName: \"kubernetes.io/projected/244e5c68-a93a-44e7-a8fd-d4368ee754bd-kube-api-access-spxbn\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2\" (UID: \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.491314 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/244e5c68-a93a-44e7-a8fd-d4368ee754bd-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2\" (UID: \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.496898 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/244e5c68-a93a-44e7-a8fd-d4368ee754bd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2\" (UID: \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.499023 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/244e5c68-a93a-44e7-a8fd-d4368ee754bd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2\" (UID: \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.501874 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/244e5c68-a93a-44e7-a8fd-d4368ee754bd-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2\" (UID: \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.511392 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spxbn\" (UniqueName: \"kubernetes.io/projected/244e5c68-a93a-44e7-a8fd-d4368ee754bd-kube-api-access-spxbn\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2\" (UID: \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" Jan 21 13:29:07 crc kubenswrapper[4765]: I0121 13:29:07.661755 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" Jan 21 13:29:08 crc kubenswrapper[4765]: I0121 13:29:08.270833 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2"] Jan 21 13:29:09 crc kubenswrapper[4765]: I0121 13:29:09.237350 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" event={"ID":"244e5c68-a93a-44e7-a8fd-d4368ee754bd","Type":"ContainerStarted","Data":"0cdcaa0739f54246c903bb553d1b62c80e069d520f6b26851d5996567aa1371a"} Jan 21 13:29:09 crc kubenswrapper[4765]: I0121 13:29:09.237607 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" event={"ID":"244e5c68-a93a-44e7-a8fd-d4368ee754bd","Type":"ContainerStarted","Data":"5b031172f3a56fd74a853282921a5c6311b7d801095ea2bda136ccf61f04c39a"} Jan 21 13:29:09 crc kubenswrapper[4765]: I0121 13:29:09.255853 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" podStartSLOduration=1.7531308380000001 podStartE2EDuration="2.255832032s" podCreationTimestamp="2026-01-21 13:29:07 +0000 UTC" firstStartedPulling="2026-01-21 13:29:08.279851446 +0000 UTC m=+1609.297577278" lastFinishedPulling="2026-01-21 13:29:08.78255265 +0000 UTC m=+1609.800278472" observedRunningTime="2026-01-21 13:29:09.251088394 +0000 UTC m=+1610.268814216" watchObservedRunningTime="2026-01-21 13:29:09.255832032 +0000 UTC m=+1610.273557854" Jan 21 13:29:14 crc kubenswrapper[4765]: I0121 13:29:14.445720 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:29:14 crc kubenswrapper[4765]: I0121 13:29:14.446248 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:29:20 crc kubenswrapper[4765]: I0121 13:29:20.150082 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b48j6"] Jan 21 13:29:20 crc kubenswrapper[4765]: I0121 13:29:20.157905 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b48j6" Jan 21 13:29:20 crc kubenswrapper[4765]: I0121 13:29:20.191518 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b48j6"] Jan 21 13:29:20 crc kubenswrapper[4765]: I0121 13:29:20.288359 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c5a1c54-b991-4ef4-865e-ea4fec88cd34-utilities\") pod \"redhat-marketplace-b48j6\" (UID: \"3c5a1c54-b991-4ef4-865e-ea4fec88cd34\") " pod="openshift-marketplace/redhat-marketplace-b48j6" Jan 21 13:29:20 crc kubenswrapper[4765]: I0121 13:29:20.288661 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt8cb\" (UniqueName: \"kubernetes.io/projected/3c5a1c54-b991-4ef4-865e-ea4fec88cd34-kube-api-access-mt8cb\") pod \"redhat-marketplace-b48j6\" (UID: \"3c5a1c54-b991-4ef4-865e-ea4fec88cd34\") " pod="openshift-marketplace/redhat-marketplace-b48j6" Jan 21 13:29:20 crc kubenswrapper[4765]: I0121 13:29:20.288686 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c5a1c54-b991-4ef4-865e-ea4fec88cd34-catalog-content\") pod \"redhat-marketplace-b48j6\" (UID: \"3c5a1c54-b991-4ef4-865e-ea4fec88cd34\") " pod="openshift-marketplace/redhat-marketplace-b48j6" Jan 21 13:29:20 crc kubenswrapper[4765]: I0121 13:29:20.390660 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c5a1c54-b991-4ef4-865e-ea4fec88cd34-utilities\") pod \"redhat-marketplace-b48j6\" (UID: \"3c5a1c54-b991-4ef4-865e-ea4fec88cd34\") " pod="openshift-marketplace/redhat-marketplace-b48j6" Jan 21 13:29:20 crc kubenswrapper[4765]: I0121 13:29:20.390780 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt8cb\" (UniqueName: \"kubernetes.io/projected/3c5a1c54-b991-4ef4-865e-ea4fec88cd34-kube-api-access-mt8cb\") pod \"redhat-marketplace-b48j6\" (UID: \"3c5a1c54-b991-4ef4-865e-ea4fec88cd34\") " pod="openshift-marketplace/redhat-marketplace-b48j6" Jan 21 13:29:20 crc kubenswrapper[4765]: I0121 13:29:20.390813 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c5a1c54-b991-4ef4-865e-ea4fec88cd34-catalog-content\") pod \"redhat-marketplace-b48j6\" (UID: \"3c5a1c54-b991-4ef4-865e-ea4fec88cd34\") " pod="openshift-marketplace/redhat-marketplace-b48j6" Jan 21 13:29:20 crc kubenswrapper[4765]: I0121 13:29:20.391558 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c5a1c54-b991-4ef4-865e-ea4fec88cd34-catalog-content\") pod \"redhat-marketplace-b48j6\" (UID: \"3c5a1c54-b991-4ef4-865e-ea4fec88cd34\") " pod="openshift-marketplace/redhat-marketplace-b48j6" Jan 21 13:29:20 crc kubenswrapper[4765]: I0121 13:29:20.391678 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c5a1c54-b991-4ef4-865e-ea4fec88cd34-utilities\") pod \"redhat-marketplace-b48j6\" (UID: \"3c5a1c54-b991-4ef4-865e-ea4fec88cd34\") " pod="openshift-marketplace/redhat-marketplace-b48j6" Jan 21 13:29:20 crc kubenswrapper[4765]: I0121 13:29:20.417279 4765 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mt8cb\" (UniqueName: \"kubernetes.io/projected/3c5a1c54-b991-4ef4-865e-ea4fec88cd34-kube-api-access-mt8cb\") pod \"redhat-marketplace-b48j6\" (UID: \"3c5a1c54-b991-4ef4-865e-ea4fec88cd34\") " pod="openshift-marketplace/redhat-marketplace-b48j6" Jan 21 13:29:20 crc kubenswrapper[4765]: I0121 13:29:20.477401 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b48j6" Jan 21 13:29:20 crc kubenswrapper[4765]: W0121 13:29:20.993503 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c5a1c54_b991_4ef4_865e_ea4fec88cd34.slice/crio-d9d9e490c3372ecd52fca09b9699f18e0f8dc62223fe4c2a472bddad6a758d8e WatchSource:0}: Error finding container d9d9e490c3372ecd52fca09b9699f18e0f8dc62223fe4c2a472bddad6a758d8e: Status 404 returned error can't find the container with id d9d9e490c3372ecd52fca09b9699f18e0f8dc62223fe4c2a472bddad6a758d8e Jan 21 13:29:20 crc kubenswrapper[4765]: I0121 13:29:20.996125 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b48j6"] Jan 21 13:29:21 crc kubenswrapper[4765]: I0121 13:29:21.346450 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b48j6" event={"ID":"3c5a1c54-b991-4ef4-865e-ea4fec88cd34","Type":"ContainerStarted","Data":"d9d9e490c3372ecd52fca09b9699f18e0f8dc62223fe4c2a472bddad6a758d8e"} Jan 21 13:29:22 crc kubenswrapper[4765]: I0121 13:29:22.370496 4765 generic.go:334] "Generic (PLEG): container finished" podID="3c5a1c54-b991-4ef4-865e-ea4fec88cd34" containerID="de86bfb0ef86ff10641aa960f56caf751ad9717506e025c899bea17fb0ec3ccb" exitCode=0 Jan 21 13:29:22 crc kubenswrapper[4765]: I0121 13:29:22.370836 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b48j6" event={"ID":"3c5a1c54-b991-4ef4-865e-ea4fec88cd34","Type":"ContainerDied","Data":"de86bfb0ef86ff10641aa960f56caf751ad9717506e025c899bea17fb0ec3ccb"} Jan 21 13:29:23 crc kubenswrapper[4765]: I0121 13:29:23.387249 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b48j6" event={"ID":"3c5a1c54-b991-4ef4-865e-ea4fec88cd34","Type":"ContainerStarted","Data":"16c11e0e200e86f05b5d414edae81b690b5534d1571d86a282a3118543f2f8f2"} Jan 21 13:29:24 crc kubenswrapper[4765]: I0121 13:29:24.397883 4765 generic.go:334] "Generic (PLEG): container finished" podID="3c5a1c54-b991-4ef4-865e-ea4fec88cd34" containerID="16c11e0e200e86f05b5d414edae81b690b5534d1571d86a282a3118543f2f8f2" exitCode=0 Jan 21 13:29:24 crc kubenswrapper[4765]: I0121 13:29:24.397935 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b48j6" event={"ID":"3c5a1c54-b991-4ef4-865e-ea4fec88cd34","Type":"ContainerDied","Data":"16c11e0e200e86f05b5d414edae81b690b5534d1571d86a282a3118543f2f8f2"} Jan 21 13:29:25 crc kubenswrapper[4765]: I0121 13:29:25.411180 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b48j6" event={"ID":"3c5a1c54-b991-4ef4-865e-ea4fec88cd34","Type":"ContainerStarted","Data":"e54c22043fea31535039786814371aa5e0a020ac65c0da7ca9e09b2709dee153"} Jan 21 13:29:25 crc kubenswrapper[4765]: I0121 13:29:25.433655 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b48j6" podStartSLOduration=2.9938174159999997 
podStartE2EDuration="5.43362927s" podCreationTimestamp="2026-01-21 13:29:20 +0000 UTC" firstStartedPulling="2026-01-21 13:29:22.373459311 +0000 UTC m=+1623.391185133" lastFinishedPulling="2026-01-21 13:29:24.813271165 +0000 UTC m=+1625.830996987" observedRunningTime="2026-01-21 13:29:25.430663394 +0000 UTC m=+1626.448389226" watchObservedRunningTime="2026-01-21 13:29:25.43362927 +0000 UTC m=+1626.451355112" Jan 21 13:29:30 crc kubenswrapper[4765]: I0121 13:29:30.478310 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b48j6" Jan 21 13:29:30 crc kubenswrapper[4765]: I0121 13:29:30.480858 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b48j6" Jan 21 13:29:30 crc kubenswrapper[4765]: I0121 13:29:30.543734 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b48j6" Jan 21 13:29:31 crc kubenswrapper[4765]: I0121 13:29:31.551602 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b48j6" Jan 21 13:29:31 crc kubenswrapper[4765]: I0121 13:29:31.636294 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b48j6"] Jan 21 13:29:33 crc kubenswrapper[4765]: I0121 13:29:33.522737 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b48j6" podUID="3c5a1c54-b991-4ef4-865e-ea4fec88cd34" containerName="registry-server" containerID="cri-o://e54c22043fea31535039786814371aa5e0a020ac65c0da7ca9e09b2709dee153" gracePeriod=2 Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.109752 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b48j6" Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.207081 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c5a1c54-b991-4ef4-865e-ea4fec88cd34-catalog-content\") pod \"3c5a1c54-b991-4ef4-865e-ea4fec88cd34\" (UID: \"3c5a1c54-b991-4ef4-865e-ea4fec88cd34\") " Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.207129 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt8cb\" (UniqueName: \"kubernetes.io/projected/3c5a1c54-b991-4ef4-865e-ea4fec88cd34-kube-api-access-mt8cb\") pod \"3c5a1c54-b991-4ef4-865e-ea4fec88cd34\" (UID: \"3c5a1c54-b991-4ef4-865e-ea4fec88cd34\") " Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.207158 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c5a1c54-b991-4ef4-865e-ea4fec88cd34-utilities\") pod \"3c5a1c54-b991-4ef4-865e-ea4fec88cd34\" (UID: \"3c5a1c54-b991-4ef4-865e-ea4fec88cd34\") " Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.207941 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c5a1c54-b991-4ef4-865e-ea4fec88cd34-utilities" (OuterVolumeSpecName: "utilities") pod "3c5a1c54-b991-4ef4-865e-ea4fec88cd34" (UID: "3c5a1c54-b991-4ef4-865e-ea4fec88cd34"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.212241 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c5a1c54-b991-4ef4-865e-ea4fec88cd34-kube-api-access-mt8cb" (OuterVolumeSpecName: "kube-api-access-mt8cb") pod "3c5a1c54-b991-4ef4-865e-ea4fec88cd34" (UID: "3c5a1c54-b991-4ef4-865e-ea4fec88cd34"). InnerVolumeSpecName "kube-api-access-mt8cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.234419 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c5a1c54-b991-4ef4-865e-ea4fec88cd34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c5a1c54-b991-4ef4-865e-ea4fec88cd34" (UID: "3c5a1c54-b991-4ef4-865e-ea4fec88cd34"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.309234 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c5a1c54-b991-4ef4-865e-ea4fec88cd34-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.309265 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt8cb\" (UniqueName: \"kubernetes.io/projected/3c5a1c54-b991-4ef4-865e-ea4fec88cd34-kube-api-access-mt8cb\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.309276 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c5a1c54-b991-4ef4-865e-ea4fec88cd34-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.533985 4765 generic.go:334] "Generic (PLEG): container finished" podID="3c5a1c54-b991-4ef4-865e-ea4fec88cd34" containerID="e54c22043fea31535039786814371aa5e0a020ac65c0da7ca9e09b2709dee153" exitCode=0 Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.534039 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b48j6" event={"ID":"3c5a1c54-b991-4ef4-865e-ea4fec88cd34","Type":"ContainerDied","Data":"e54c22043fea31535039786814371aa5e0a020ac65c0da7ca9e09b2709dee153"} Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.534056 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b48j6" Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.534126 4765 scope.go:117] "RemoveContainer" containerID="e54c22043fea31535039786814371aa5e0a020ac65c0da7ca9e09b2709dee153" Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.534112 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b48j6" event={"ID":"3c5a1c54-b991-4ef4-865e-ea4fec88cd34","Type":"ContainerDied","Data":"d9d9e490c3372ecd52fca09b9699f18e0f8dc62223fe4c2a472bddad6a758d8e"} Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.575412 4765 scope.go:117] "RemoveContainer" containerID="16c11e0e200e86f05b5d414edae81b690b5534d1571d86a282a3118543f2f8f2" Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.581327 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b48j6"] Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.595924 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b48j6"] Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.616392 4765 scope.go:117] "RemoveContainer" containerID="de86bfb0ef86ff10641aa960f56caf751ad9717506e025c899bea17fb0ec3ccb" Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.681968 4765 scope.go:117] "RemoveContainer" containerID="e54c22043fea31535039786814371aa5e0a020ac65c0da7ca9e09b2709dee153" Jan 21 13:29:34 crc kubenswrapper[4765]: E0121 13:29:34.682457 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e54c22043fea31535039786814371aa5e0a020ac65c0da7ca9e09b2709dee153\": container with ID starting with e54c22043fea31535039786814371aa5e0a020ac65c0da7ca9e09b2709dee153 not found: ID does not exist" containerID="e54c22043fea31535039786814371aa5e0a020ac65c0da7ca9e09b2709dee153" Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.682488 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e54c22043fea31535039786814371aa5e0a020ac65c0da7ca9e09b2709dee153"} err="failed to get container status \"e54c22043fea31535039786814371aa5e0a020ac65c0da7ca9e09b2709dee153\": rpc error: code = NotFound desc = could not find container \"e54c22043fea31535039786814371aa5e0a020ac65c0da7ca9e09b2709dee153\": container with ID starting with e54c22043fea31535039786814371aa5e0a020ac65c0da7ca9e09b2709dee153 not found: ID does not exist" Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.682514 4765 scope.go:117] "RemoveContainer" containerID="16c11e0e200e86f05b5d414edae81b690b5534d1571d86a282a3118543f2f8f2" Jan 21 13:29:34 crc kubenswrapper[4765]: E0121 13:29:34.682916 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16c11e0e200e86f05b5d414edae81b690b5534d1571d86a282a3118543f2f8f2\": container with ID starting with 16c11e0e200e86f05b5d414edae81b690b5534d1571d86a282a3118543f2f8f2 not found: ID does not exist" containerID="16c11e0e200e86f05b5d414edae81b690b5534d1571d86a282a3118543f2f8f2" Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.683013 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16c11e0e200e86f05b5d414edae81b690b5534d1571d86a282a3118543f2f8f2"} err="failed to get container status \"16c11e0e200e86f05b5d414edae81b690b5534d1571d86a282a3118543f2f8f2\": rpc error: code = NotFound desc = could not find 
container \"16c11e0e200e86f05b5d414edae81b690b5534d1571d86a282a3118543f2f8f2\": container with ID starting with 16c11e0e200e86f05b5d414edae81b690b5534d1571d86a282a3118543f2f8f2 not found: ID does not exist" Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.683105 4765 scope.go:117] "RemoveContainer" containerID="de86bfb0ef86ff10641aa960f56caf751ad9717506e025c899bea17fb0ec3ccb" Jan 21 13:29:34 crc kubenswrapper[4765]: E0121 13:29:34.683466 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de86bfb0ef86ff10641aa960f56caf751ad9717506e025c899bea17fb0ec3ccb\": container with ID starting with de86bfb0ef86ff10641aa960f56caf751ad9717506e025c899bea17fb0ec3ccb not found: ID does not exist" containerID="de86bfb0ef86ff10641aa960f56caf751ad9717506e025c899bea17fb0ec3ccb" Jan 21 13:29:34 crc kubenswrapper[4765]: I0121 13:29:34.683570 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de86bfb0ef86ff10641aa960f56caf751ad9717506e025c899bea17fb0ec3ccb"} err="failed to get container status \"de86bfb0ef86ff10641aa960f56caf751ad9717506e025c899bea17fb0ec3ccb\": rpc error: code = NotFound desc = could not find container \"de86bfb0ef86ff10641aa960f56caf751ad9717506e025c899bea17fb0ec3ccb\": container with ID starting with de86bfb0ef86ff10641aa960f56caf751ad9717506e025c899bea17fb0ec3ccb not found: ID does not exist" Jan 21 13:29:35 crc kubenswrapper[4765]: I0121 13:29:35.629359 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c5a1c54-b991-4ef4-865e-ea4fec88cd34" path="/var/lib/kubelet/pods/3c5a1c54-b991-4ef4-865e-ea4fec88cd34/volumes" Jan 21 13:29:35 crc kubenswrapper[4765]: I0121 13:29:35.807678 4765 scope.go:117] "RemoveContainer" containerID="f8a73ccbe593ba79f289915182b7e1741421333934829e5bb29ff7ccc180175a" Jan 21 13:29:35 crc kubenswrapper[4765]: I0121 13:29:35.827610 4765 scope.go:117] "RemoveContainer" containerID="c2cc1c8c9784782125e37e34ddd38da385ce68363964beb236e4a274e1b53cc1" Jan 21 13:29:35 crc kubenswrapper[4765]: I0121 13:29:35.846146 4765 scope.go:117] "RemoveContainer" containerID="8d605cc7ccf3d837cef8b78c80357c15df771a2a7f737872c369b5e655344bf8" Jan 21 13:29:35 crc kubenswrapper[4765]: I0121 13:29:35.899071 4765 scope.go:117] "RemoveContainer" containerID="f147b57d7bb3d9a984b44d1d501cab848a2b423001fc765a7195550a05e30cf9" Jan 21 13:29:35 crc kubenswrapper[4765]: I0121 13:29:35.928534 4765 scope.go:117] "RemoveContainer" containerID="52a73a97a1ecdfd1ac850c202d1c5dceca451c63ca1727f3dbdb20e40b76e014" Jan 21 13:29:35 crc kubenswrapper[4765]: I0121 13:29:35.967359 4765 scope.go:117] "RemoveContainer" containerID="cca2d583fed06e06f35eb259ace9a823748a14ecf0a7e29057f5b338721b0ad4" Jan 21 13:29:44 crc kubenswrapper[4765]: I0121 13:29:44.446066 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:29:44 crc kubenswrapper[4765]: I0121 13:29:44.446805 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:29:44 crc 
kubenswrapper[4765]: I0121 13:29:44.446867 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:29:44 crc kubenswrapper[4765]: I0121 13:29:44.447751 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0"} pod="openshift-machine-config-operator/machine-config-daemon-v72nq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:29:44 crc kubenswrapper[4765]: I0121 13:29:44.447813 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" containerID="cri-o://7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" gracePeriod=600 Jan 21 13:29:44 crc kubenswrapper[4765]: E0121 13:29:44.573715 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:29:44 crc kubenswrapper[4765]: I0121 13:29:44.706539 4765 generic.go:334] "Generic (PLEG): container finished" podID="e149390c-e4da-4dfd-bed2-b14de058f921" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" exitCode=0 Jan 21 13:29:44 crc kubenswrapper[4765]: I0121 13:29:44.706580 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerDied","Data":"7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0"} Jan 21 13:29:44 crc kubenswrapper[4765]: I0121 13:29:44.706615 4765 scope.go:117] "RemoveContainer" containerID="6c509a513e1ebf6d2d06160d429b88c481004be78e418699ef3864eb908e3f4c" Jan 21 13:29:44 crc kubenswrapper[4765]: I0121 13:29:44.708027 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:29:44 crc kubenswrapper[4765]: E0121 13:29:44.708710 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:29:58 crc kubenswrapper[4765]: I0121 13:29:58.614187 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:29:58 crc kubenswrapper[4765]: E0121 13:29:58.615051 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.157570 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6"] Jan 21 13:30:00 crc kubenswrapper[4765]: E0121 13:30:00.158579 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5a1c54-b991-4ef4-865e-ea4fec88cd34" containerName="extract-content" Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.158620 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5a1c54-b991-4ef4-865e-ea4fec88cd34" containerName="extract-content" Jan 21 13:30:00 crc kubenswrapper[4765]: E0121 13:30:00.158727 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5a1c54-b991-4ef4-865e-ea4fec88cd34" containerName="extract-utilities" Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.158740 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5a1c54-b991-4ef4-865e-ea4fec88cd34" containerName="extract-utilities" Jan 21 13:30:00 crc kubenswrapper[4765]: E0121 13:30:00.158760 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c5a1c54-b991-4ef4-865e-ea4fec88cd34" containerName="registry-server" Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.158768 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c5a1c54-b991-4ef4-865e-ea4fec88cd34" containerName="registry-server" Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.159170 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c5a1c54-b991-4ef4-865e-ea4fec88cd34" containerName="registry-server" Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.160388 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6" Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.164204 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.164635 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.182607 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6"] Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.246733 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2768b478-fd48-4851-8fa3-2b728baccd76-secret-volume\") pod \"collect-profiles-29483370-frrv6\" (UID: \"2768b478-fd48-4851-8fa3-2b728baccd76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6" Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.247118 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2768b478-fd48-4851-8fa3-2b728baccd76-config-volume\") pod \"collect-profiles-29483370-frrv6\" (UID: \"2768b478-fd48-4851-8fa3-2b728baccd76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6" Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.247331 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmw6c\" (UniqueName: \"kubernetes.io/projected/2768b478-fd48-4851-8fa3-2b728baccd76-kube-api-access-tmw6c\") pod \"collect-profiles-29483370-frrv6\" (UID: \"2768b478-fd48-4851-8fa3-2b728baccd76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6" Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.349429 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2768b478-fd48-4851-8fa3-2b728baccd76-secret-volume\") pod \"collect-profiles-29483370-frrv6\" (UID: \"2768b478-fd48-4851-8fa3-2b728baccd76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6" Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.349528 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2768b478-fd48-4851-8fa3-2b728baccd76-config-volume\") pod \"collect-profiles-29483370-frrv6\" (UID: \"2768b478-fd48-4851-8fa3-2b728baccd76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6" Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.349606 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmw6c\" (UniqueName: \"kubernetes.io/projected/2768b478-fd48-4851-8fa3-2b728baccd76-kube-api-access-tmw6c\") pod \"collect-profiles-29483370-frrv6\" (UID: \"2768b478-fd48-4851-8fa3-2b728baccd76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6" Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.350588 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2768b478-fd48-4851-8fa3-2b728baccd76-config-volume\") pod 
\"collect-profiles-29483370-frrv6\" (UID: \"2768b478-fd48-4851-8fa3-2b728baccd76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6" Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.355509 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2768b478-fd48-4851-8fa3-2b728baccd76-secret-volume\") pod \"collect-profiles-29483370-frrv6\" (UID: \"2768b478-fd48-4851-8fa3-2b728baccd76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6" Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.373868 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmw6c\" (UniqueName: \"kubernetes.io/projected/2768b478-fd48-4851-8fa3-2b728baccd76-kube-api-access-tmw6c\") pod \"collect-profiles-29483370-frrv6\" (UID: \"2768b478-fd48-4851-8fa3-2b728baccd76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6" Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.482343 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6" Jan 21 13:30:00 crc kubenswrapper[4765]: I0121 13:30:00.951363 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6"] Jan 21 13:30:01 crc kubenswrapper[4765]: I0121 13:30:01.892772 4765 generic.go:334] "Generic (PLEG): container finished" podID="2768b478-fd48-4851-8fa3-2b728baccd76" containerID="9a070c7c7c1e86b813794e568569290b7bb1f77420a2cbae773c4e1923e0e894" exitCode=0 Jan 21 13:30:01 crc kubenswrapper[4765]: I0121 13:30:01.892957 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6" event={"ID":"2768b478-fd48-4851-8fa3-2b728baccd76","Type":"ContainerDied","Data":"9a070c7c7c1e86b813794e568569290b7bb1f77420a2cbae773c4e1923e0e894"} Jan 21 13:30:01 crc kubenswrapper[4765]: I0121 13:30:01.893105 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6" event={"ID":"2768b478-fd48-4851-8fa3-2b728baccd76","Type":"ContainerStarted","Data":"cd7696194e25afedc728f739139fcfebaf5525c9d2edaca46b8ee1f5adc9fc41"} Jan 21 13:30:03 crc kubenswrapper[4765]: I0121 13:30:03.229023 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6" Jan 21 13:30:03 crc kubenswrapper[4765]: I0121 13:30:03.326843 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2768b478-fd48-4851-8fa3-2b728baccd76-config-volume\") pod \"2768b478-fd48-4851-8fa3-2b728baccd76\" (UID: \"2768b478-fd48-4851-8fa3-2b728baccd76\") " Jan 21 13:30:03 crc kubenswrapper[4765]: I0121 13:30:03.327032 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmw6c\" (UniqueName: \"kubernetes.io/projected/2768b478-fd48-4851-8fa3-2b728baccd76-kube-api-access-tmw6c\") pod \"2768b478-fd48-4851-8fa3-2b728baccd76\" (UID: \"2768b478-fd48-4851-8fa3-2b728baccd76\") " Jan 21 13:30:03 crc kubenswrapper[4765]: I0121 13:30:03.327181 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2768b478-fd48-4851-8fa3-2b728baccd76-secret-volume\") pod \"2768b478-fd48-4851-8fa3-2b728baccd76\" (UID: \"2768b478-fd48-4851-8fa3-2b728baccd76\") " Jan 21 13:30:03 crc kubenswrapper[4765]: I0121 13:30:03.327645 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2768b478-fd48-4851-8fa3-2b728baccd76-config-volume" (OuterVolumeSpecName: "config-volume") pod "2768b478-fd48-4851-8fa3-2b728baccd76" (UID: "2768b478-fd48-4851-8fa3-2b728baccd76"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:30:03 crc kubenswrapper[4765]: I0121 13:30:03.333000 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2768b478-fd48-4851-8fa3-2b728baccd76-kube-api-access-tmw6c" (OuterVolumeSpecName: "kube-api-access-tmw6c") pod "2768b478-fd48-4851-8fa3-2b728baccd76" (UID: "2768b478-fd48-4851-8fa3-2b728baccd76"). InnerVolumeSpecName "kube-api-access-tmw6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:30:03 crc kubenswrapper[4765]: I0121 13:30:03.333255 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2768b478-fd48-4851-8fa3-2b728baccd76-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2768b478-fd48-4851-8fa3-2b728baccd76" (UID: "2768b478-fd48-4851-8fa3-2b728baccd76"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:03 crc kubenswrapper[4765]: I0121 13:30:03.429345 4765 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2768b478-fd48-4851-8fa3-2b728baccd76-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:03 crc kubenswrapper[4765]: I0121 13:30:03.429383 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmw6c\" (UniqueName: \"kubernetes.io/projected/2768b478-fd48-4851-8fa3-2b728baccd76-kube-api-access-tmw6c\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:03 crc kubenswrapper[4765]: I0121 13:30:03.429395 4765 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2768b478-fd48-4851-8fa3-2b728baccd76-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:03 crc kubenswrapper[4765]: I0121 13:30:03.911965 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6" event={"ID":"2768b478-fd48-4851-8fa3-2b728baccd76","Type":"ContainerDied","Data":"cd7696194e25afedc728f739139fcfebaf5525c9d2edaca46b8ee1f5adc9fc41"} Jan 21 13:30:03 crc kubenswrapper[4765]: I0121 13:30:03.912002 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd7696194e25afedc728f739139fcfebaf5525c9d2edaca46b8ee1f5adc9fc41" Jan 21 13:30:03 crc kubenswrapper[4765]: I0121 13:30:03.912033 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6" Jan 21 13:30:11 crc kubenswrapper[4765]: I0121 13:30:11.613882 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:30:11 crc kubenswrapper[4765]: E0121 13:30:11.615898 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:30:21 crc kubenswrapper[4765]: I0121 13:30:21.731273 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rkkcs"] Jan 21 13:30:21 crc kubenswrapper[4765]: E0121 13:30:21.732531 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2768b478-fd48-4851-8fa3-2b728baccd76" containerName="collect-profiles" Jan 21 13:30:21 crc kubenswrapper[4765]: I0121 13:30:21.732547 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="2768b478-fd48-4851-8fa3-2b728baccd76" containerName="collect-profiles" Jan 21 13:30:21 crc kubenswrapper[4765]: I0121 13:30:21.732744 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="2768b478-fd48-4851-8fa3-2b728baccd76" containerName="collect-profiles" Jan 21 13:30:21 crc kubenswrapper[4765]: I0121 13:30:21.737292 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rkkcs" Jan 21 13:30:21 crc kubenswrapper[4765]: I0121 13:30:21.788825 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rkkcs"] Jan 21 13:30:21 crc kubenswrapper[4765]: I0121 13:30:21.860534 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a6ba503-6102-4e7c-8938-d15db6835cfe-catalog-content\") pod \"community-operators-rkkcs\" (UID: \"2a6ba503-6102-4e7c-8938-d15db6835cfe\") " pod="openshift-marketplace/community-operators-rkkcs" Jan 21 13:30:21 crc kubenswrapper[4765]: I0121 13:30:21.861201 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a6ba503-6102-4e7c-8938-d15db6835cfe-utilities\") pod \"community-operators-rkkcs\" (UID: \"2a6ba503-6102-4e7c-8938-d15db6835cfe\") " pod="openshift-marketplace/community-operators-rkkcs" Jan 21 13:30:21 crc kubenswrapper[4765]: I0121 13:30:21.861256 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9nn2\" (UniqueName: \"kubernetes.io/projected/2a6ba503-6102-4e7c-8938-d15db6835cfe-kube-api-access-q9nn2\") pod \"community-operators-rkkcs\" (UID: \"2a6ba503-6102-4e7c-8938-d15db6835cfe\") " pod="openshift-marketplace/community-operators-rkkcs" Jan 21 13:30:21 crc kubenswrapper[4765]: I0121 13:30:21.963783 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a6ba503-6102-4e7c-8938-d15db6835cfe-utilities\") pod \"community-operators-rkkcs\" (UID: \"2a6ba503-6102-4e7c-8938-d15db6835cfe\") " pod="openshift-marketplace/community-operators-rkkcs" Jan 21 13:30:21 crc kubenswrapper[4765]: I0121 13:30:21.963836 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9nn2\" (UniqueName: \"kubernetes.io/projected/2a6ba503-6102-4e7c-8938-d15db6835cfe-kube-api-access-q9nn2\") pod \"community-operators-rkkcs\" (UID: \"2a6ba503-6102-4e7c-8938-d15db6835cfe\") " pod="openshift-marketplace/community-operators-rkkcs" Jan 21 13:30:21 crc kubenswrapper[4765]: I0121 13:30:21.963910 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a6ba503-6102-4e7c-8938-d15db6835cfe-catalog-content\") pod \"community-operators-rkkcs\" (UID: \"2a6ba503-6102-4e7c-8938-d15db6835cfe\") " pod="openshift-marketplace/community-operators-rkkcs" Jan 21 13:30:21 crc kubenswrapper[4765]: I0121 13:30:21.964460 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a6ba503-6102-4e7c-8938-d15db6835cfe-catalog-content\") pod \"community-operators-rkkcs\" (UID: \"2a6ba503-6102-4e7c-8938-d15db6835cfe\") " pod="openshift-marketplace/community-operators-rkkcs" Jan 21 13:30:21 crc kubenswrapper[4765]: I0121 13:30:21.964484 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a6ba503-6102-4e7c-8938-d15db6835cfe-utilities\") pod \"community-operators-rkkcs\" (UID: \"2a6ba503-6102-4e7c-8938-d15db6835cfe\") " pod="openshift-marketplace/community-operators-rkkcs" Jan 21 13:30:21 crc kubenswrapper[4765]: I0121 13:30:21.990492 4765 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q9nn2\" (UniqueName: \"kubernetes.io/projected/2a6ba503-6102-4e7c-8938-d15db6835cfe-kube-api-access-q9nn2\") pod \"community-operators-rkkcs\" (UID: \"2a6ba503-6102-4e7c-8938-d15db6835cfe\") " pod="openshift-marketplace/community-operators-rkkcs" Jan 21 13:30:22 crc kubenswrapper[4765]: I0121 13:30:22.055737 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rkkcs" Jan 21 13:30:22 crc kubenswrapper[4765]: I0121 13:30:22.585075 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rkkcs"] Jan 21 13:30:22 crc kubenswrapper[4765]: W0121 13:30:22.585779 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a6ba503_6102_4e7c_8938_d15db6835cfe.slice/crio-92b3b7efa1b2aa32ece32ad6c55e6c3b30ca5877830cade1351e9c7f42c8c521 WatchSource:0}: Error finding container 92b3b7efa1b2aa32ece32ad6c55e6c3b30ca5877830cade1351e9c7f42c8c521: Status 404 returned error can't find the container with id 92b3b7efa1b2aa32ece32ad6c55e6c3b30ca5877830cade1351e9c7f42c8c521 Jan 21 13:30:23 crc kubenswrapper[4765]: I0121 13:30:23.105506 4765 generic.go:334] "Generic (PLEG): container finished" podID="2a6ba503-6102-4e7c-8938-d15db6835cfe" containerID="40feec8e1251da1125141afc9e8b705221502a28a8d14806df948ce1ea45a528" exitCode=0 Jan 21 13:30:23 crc kubenswrapper[4765]: I0121 13:30:23.105560 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkkcs" event={"ID":"2a6ba503-6102-4e7c-8938-d15db6835cfe","Type":"ContainerDied","Data":"40feec8e1251da1125141afc9e8b705221502a28a8d14806df948ce1ea45a528"} Jan 21 13:30:23 crc kubenswrapper[4765]: I0121 13:30:23.106927 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkkcs" event={"ID":"2a6ba503-6102-4e7c-8938-d15db6835cfe","Type":"ContainerStarted","Data":"92b3b7efa1b2aa32ece32ad6c55e6c3b30ca5877830cade1351e9c7f42c8c521"} Jan 21 13:30:23 crc kubenswrapper[4765]: I0121 13:30:23.109972 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:30:25 crc kubenswrapper[4765]: I0121 13:30:25.132916 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkkcs" event={"ID":"2a6ba503-6102-4e7c-8938-d15db6835cfe","Type":"ContainerStarted","Data":"485f98ef7020f7742175cefbbb2988a4decee0c2206e00bfbfd690b6b6a1fb84"} Jan 21 13:30:26 crc kubenswrapper[4765]: I0121 13:30:26.144421 4765 generic.go:334] "Generic (PLEG): container finished" podID="2a6ba503-6102-4e7c-8938-d15db6835cfe" containerID="485f98ef7020f7742175cefbbb2988a4decee0c2206e00bfbfd690b6b6a1fb84" exitCode=0 Jan 21 13:30:26 crc kubenswrapper[4765]: I0121 13:30:26.144489 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkkcs" event={"ID":"2a6ba503-6102-4e7c-8938-d15db6835cfe","Type":"ContainerDied","Data":"485f98ef7020f7742175cefbbb2988a4decee0c2206e00bfbfd690b6b6a1fb84"} Jan 21 13:30:26 crc kubenswrapper[4765]: I0121 13:30:26.614599 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:30:26 crc kubenswrapper[4765]: E0121 13:30:26.614881 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:30:27 crc kubenswrapper[4765]: I0121 13:30:27.155948 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkkcs" event={"ID":"2a6ba503-6102-4e7c-8938-d15db6835cfe","Type":"ContainerStarted","Data":"173bab021c6cd3990d282f8139d67e50ddb76854bb67c8683da2f69244060265"} Jan 21 13:30:27 crc kubenswrapper[4765]: I0121 13:30:27.182424 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rkkcs" podStartSLOduration=2.72709981 podStartE2EDuration="6.182402821s" podCreationTimestamp="2026-01-21 13:30:21 +0000 UTC" firstStartedPulling="2026-01-21 13:30:23.109712911 +0000 UTC m=+1684.127438743" lastFinishedPulling="2026-01-21 13:30:26.565015932 +0000 UTC m=+1687.582741754" observedRunningTime="2026-01-21 13:30:27.175392008 +0000 UTC m=+1688.193117820" watchObservedRunningTime="2026-01-21 13:30:27.182402821 +0000 UTC m=+1688.200128643" Jan 21 13:30:32 crc kubenswrapper[4765]: I0121 13:30:32.056613 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rkkcs" Jan 21 13:30:32 crc kubenswrapper[4765]: I0121 13:30:32.057465 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rkkcs" Jan 21 13:30:32 crc kubenswrapper[4765]: I0121 13:30:32.105679 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rkkcs" Jan 21 13:30:32 crc kubenswrapper[4765]: I0121 13:30:32.250609 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rkkcs" Jan 21 13:30:32 crc kubenswrapper[4765]: I0121 13:30:32.349335 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rkkcs"] Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.220844 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rkkcs" podUID="2a6ba503-6102-4e7c-8938-d15db6835cfe" containerName="registry-server" containerID="cri-o://173bab021c6cd3990d282f8139d67e50ddb76854bb67c8683da2f69244060265" gracePeriod=2 Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.646354 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rkkcs" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.757893 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7zj5z"] Jan 21 13:30:34 crc kubenswrapper[4765]: E0121 13:30:34.759426 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6ba503-6102-4e7c-8938-d15db6835cfe" containerName="extract-content" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.759457 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6ba503-6102-4e7c-8938-d15db6835cfe" containerName="extract-content" Jan 21 13:30:34 crc kubenswrapper[4765]: E0121 13:30:34.759479 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6ba503-6102-4e7c-8938-d15db6835cfe" containerName="extract-utilities" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.759489 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6ba503-6102-4e7c-8938-d15db6835cfe" containerName="extract-utilities" Jan 21 13:30:34 crc kubenswrapper[4765]: E0121 13:30:34.759516 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6ba503-6102-4e7c-8938-d15db6835cfe" containerName="registry-server" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.759525 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6ba503-6102-4e7c-8938-d15db6835cfe" containerName="registry-server" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.759744 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a6ba503-6102-4e7c-8938-d15db6835cfe" containerName="registry-server" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.761141 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7zj5z" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.771798 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a6ba503-6102-4e7c-8938-d15db6835cfe-catalog-content\") pod \"2a6ba503-6102-4e7c-8938-d15db6835cfe\" (UID: \"2a6ba503-6102-4e7c-8938-d15db6835cfe\") " Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.772348 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a6ba503-6102-4e7c-8938-d15db6835cfe-utilities\") pod \"2a6ba503-6102-4e7c-8938-d15db6835cfe\" (UID: \"2a6ba503-6102-4e7c-8938-d15db6835cfe\") " Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.772428 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9nn2\" (UniqueName: \"kubernetes.io/projected/2a6ba503-6102-4e7c-8938-d15db6835cfe-kube-api-access-q9nn2\") pod \"2a6ba503-6102-4e7c-8938-d15db6835cfe\" (UID: \"2a6ba503-6102-4e7c-8938-d15db6835cfe\") " Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.774670 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7zj5z"] Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.777974 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a6ba503-6102-4e7c-8938-d15db6835cfe-utilities" (OuterVolumeSpecName: "utilities") pod "2a6ba503-6102-4e7c-8938-d15db6835cfe" (UID: "2a6ba503-6102-4e7c-8938-d15db6835cfe"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.792060 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a6ba503-6102-4e7c-8938-d15db6835cfe-kube-api-access-q9nn2" (OuterVolumeSpecName: "kube-api-access-q9nn2") pod "2a6ba503-6102-4e7c-8938-d15db6835cfe" (UID: "2a6ba503-6102-4e7c-8938-d15db6835cfe"). InnerVolumeSpecName "kube-api-access-q9nn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.856461 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a6ba503-6102-4e7c-8938-d15db6835cfe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a6ba503-6102-4e7c-8938-d15db6835cfe" (UID: "2a6ba503-6102-4e7c-8938-d15db6835cfe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.875181 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23c3f3dc-940c-4df7-a413-752d7f81721e-catalog-content\") pod \"certified-operators-7zj5z\" (UID: \"23c3f3dc-940c-4df7-a413-752d7f81721e\") " pod="openshift-marketplace/certified-operators-7zj5z" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.875517 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23c3f3dc-940c-4df7-a413-752d7f81721e-utilities\") pod \"certified-operators-7zj5z\" (UID: \"23c3f3dc-940c-4df7-a413-752d7f81721e\") " pod="openshift-marketplace/certified-operators-7zj5z" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.875658 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qptw4\" (UniqueName: \"kubernetes.io/projected/23c3f3dc-940c-4df7-a413-752d7f81721e-kube-api-access-qptw4\") pod \"certified-operators-7zj5z\" (UID: \"23c3f3dc-940c-4df7-a413-752d7f81721e\") " pod="openshift-marketplace/certified-operators-7zj5z" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.876117 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9nn2\" (UniqueName: \"kubernetes.io/projected/2a6ba503-6102-4e7c-8938-d15db6835cfe-kube-api-access-q9nn2\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.876159 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a6ba503-6102-4e7c-8938-d15db6835cfe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.876173 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a6ba503-6102-4e7c-8938-d15db6835cfe-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.979360 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23c3f3dc-940c-4df7-a413-752d7f81721e-catalog-content\") pod \"certified-operators-7zj5z\" (UID: \"23c3f3dc-940c-4df7-a413-752d7f81721e\") " pod="openshift-marketplace/certified-operators-7zj5z" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.979808 4765 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23c3f3dc-940c-4df7-a413-752d7f81721e-utilities\") pod \"certified-operators-7zj5z\" (UID: \"23c3f3dc-940c-4df7-a413-752d7f81721e\") " pod="openshift-marketplace/certified-operators-7zj5z" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.980028 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qptw4\" (UniqueName: \"kubernetes.io/projected/23c3f3dc-940c-4df7-a413-752d7f81721e-kube-api-access-qptw4\") pod \"certified-operators-7zj5z\" (UID: \"23c3f3dc-940c-4df7-a413-752d7f81721e\") " pod="openshift-marketplace/certified-operators-7zj5z" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.981429 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23c3f3dc-940c-4df7-a413-752d7f81721e-catalog-content\") pod \"certified-operators-7zj5z\" (UID: \"23c3f3dc-940c-4df7-a413-752d7f81721e\") " pod="openshift-marketplace/certified-operators-7zj5z" Jan 21 13:30:34 crc kubenswrapper[4765]: I0121 13:30:34.981908 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23c3f3dc-940c-4df7-a413-752d7f81721e-utilities\") pod \"certified-operators-7zj5z\" (UID: \"23c3f3dc-940c-4df7-a413-752d7f81721e\") " pod="openshift-marketplace/certified-operators-7zj5z" Jan 21 13:30:35 crc kubenswrapper[4765]: I0121 13:30:35.001720 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qptw4\" (UniqueName: \"kubernetes.io/projected/23c3f3dc-940c-4df7-a413-752d7f81721e-kube-api-access-qptw4\") pod \"certified-operators-7zj5z\" (UID: \"23c3f3dc-940c-4df7-a413-752d7f81721e\") " pod="openshift-marketplace/certified-operators-7zj5z" Jan 21 13:30:35 crc kubenswrapper[4765]: I0121 13:30:35.082153 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7zj5z" Jan 21 13:30:35 crc kubenswrapper[4765]: I0121 13:30:35.231401 4765 generic.go:334] "Generic (PLEG): container finished" podID="2a6ba503-6102-4e7c-8938-d15db6835cfe" containerID="173bab021c6cd3990d282f8139d67e50ddb76854bb67c8683da2f69244060265" exitCode=0 Jan 21 13:30:35 crc kubenswrapper[4765]: I0121 13:30:35.231454 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkkcs" event={"ID":"2a6ba503-6102-4e7c-8938-d15db6835cfe","Type":"ContainerDied","Data":"173bab021c6cd3990d282f8139d67e50ddb76854bb67c8683da2f69244060265"} Jan 21 13:30:35 crc kubenswrapper[4765]: I0121 13:30:35.231490 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkkcs" event={"ID":"2a6ba503-6102-4e7c-8938-d15db6835cfe","Type":"ContainerDied","Data":"92b3b7efa1b2aa32ece32ad6c55e6c3b30ca5877830cade1351e9c7f42c8c521"} Jan 21 13:30:35 crc kubenswrapper[4765]: I0121 13:30:35.231512 4765 scope.go:117] "RemoveContainer" containerID="173bab021c6cd3990d282f8139d67e50ddb76854bb67c8683da2f69244060265" Jan 21 13:30:35 crc kubenswrapper[4765]: I0121 13:30:35.231646 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rkkcs" Jan 21 13:30:35 crc kubenswrapper[4765]: I0121 13:30:35.285380 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rkkcs"] Jan 21 13:30:35 crc kubenswrapper[4765]: I0121 13:30:35.296555 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rkkcs"] Jan 21 13:30:35 crc kubenswrapper[4765]: I0121 13:30:35.305376 4765 scope.go:117] "RemoveContainer" containerID="485f98ef7020f7742175cefbbb2988a4decee0c2206e00bfbfd690b6b6a1fb84" Jan 21 13:30:35 crc kubenswrapper[4765]: I0121 13:30:35.353175 4765 scope.go:117] "RemoveContainer" containerID="40feec8e1251da1125141afc9e8b705221502a28a8d14806df948ce1ea45a528" Jan 21 13:30:35 crc kubenswrapper[4765]: I0121 13:30:35.406061 4765 scope.go:117] "RemoveContainer" containerID="173bab021c6cd3990d282f8139d67e50ddb76854bb67c8683da2f69244060265" Jan 21 13:30:35 crc kubenswrapper[4765]: E0121 13:30:35.407154 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"173bab021c6cd3990d282f8139d67e50ddb76854bb67c8683da2f69244060265\": container with ID starting with 173bab021c6cd3990d282f8139d67e50ddb76854bb67c8683da2f69244060265 not found: ID does not exist" containerID="173bab021c6cd3990d282f8139d67e50ddb76854bb67c8683da2f69244060265" Jan 21 13:30:35 crc kubenswrapper[4765]: I0121 13:30:35.407269 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"173bab021c6cd3990d282f8139d67e50ddb76854bb67c8683da2f69244060265"} err="failed to get container status \"173bab021c6cd3990d282f8139d67e50ddb76854bb67c8683da2f69244060265\": rpc error: code = NotFound desc = could not find container \"173bab021c6cd3990d282f8139d67e50ddb76854bb67c8683da2f69244060265\": container with ID starting with 173bab021c6cd3990d282f8139d67e50ddb76854bb67c8683da2f69244060265 not found: ID does not exist" Jan 21 13:30:35 crc kubenswrapper[4765]: I0121 13:30:35.407296 4765 scope.go:117] "RemoveContainer" containerID="485f98ef7020f7742175cefbbb2988a4decee0c2206e00bfbfd690b6b6a1fb84" Jan 21 13:30:35 crc kubenswrapper[4765]: E0121 13:30:35.407873 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"485f98ef7020f7742175cefbbb2988a4decee0c2206e00bfbfd690b6b6a1fb84\": container with ID starting with 485f98ef7020f7742175cefbbb2988a4decee0c2206e00bfbfd690b6b6a1fb84 not found: ID does not exist" containerID="485f98ef7020f7742175cefbbb2988a4decee0c2206e00bfbfd690b6b6a1fb84" Jan 21 13:30:35 crc kubenswrapper[4765]: I0121 13:30:35.407898 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"485f98ef7020f7742175cefbbb2988a4decee0c2206e00bfbfd690b6b6a1fb84"} err="failed to get container status \"485f98ef7020f7742175cefbbb2988a4decee0c2206e00bfbfd690b6b6a1fb84\": rpc error: code = NotFound desc = could not find container \"485f98ef7020f7742175cefbbb2988a4decee0c2206e00bfbfd690b6b6a1fb84\": container with ID starting with 485f98ef7020f7742175cefbbb2988a4decee0c2206e00bfbfd690b6b6a1fb84 not found: ID does not exist" Jan 21 13:30:35 crc kubenswrapper[4765]: I0121 13:30:35.407914 4765 scope.go:117] "RemoveContainer" containerID="40feec8e1251da1125141afc9e8b705221502a28a8d14806df948ce1ea45a528" Jan 21 13:30:35 crc kubenswrapper[4765]: E0121 13:30:35.408347 4765 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"40feec8e1251da1125141afc9e8b705221502a28a8d14806df948ce1ea45a528\": container with ID starting with 40feec8e1251da1125141afc9e8b705221502a28a8d14806df948ce1ea45a528 not found: ID does not exist" containerID="40feec8e1251da1125141afc9e8b705221502a28a8d14806df948ce1ea45a528" Jan 21 13:30:35 crc kubenswrapper[4765]: I0121 13:30:35.408367 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40feec8e1251da1125141afc9e8b705221502a28a8d14806df948ce1ea45a528"} err="failed to get container status \"40feec8e1251da1125141afc9e8b705221502a28a8d14806df948ce1ea45a528\": rpc error: code = NotFound desc = could not find container \"40feec8e1251da1125141afc9e8b705221502a28a8d14806df948ce1ea45a528\": container with ID starting with 40feec8e1251da1125141afc9e8b705221502a28a8d14806df948ce1ea45a528 not found: ID does not exist" Jan 21 13:30:35 crc kubenswrapper[4765]: I0121 13:30:35.549073 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7zj5z"] Jan 21 13:30:35 crc kubenswrapper[4765]: I0121 13:30:35.625037 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a6ba503-6102-4e7c-8938-d15db6835cfe" path="/var/lib/kubelet/pods/2a6ba503-6102-4e7c-8938-d15db6835cfe/volumes" Jan 21 13:30:36 crc kubenswrapper[4765]: I0121 13:30:36.086355 4765 scope.go:117] "RemoveContainer" containerID="f824fc8b37baf987c95380beb2b679b54e84b2154bb3b3a6bd1202d3b35635f6" Jan 21 13:30:36 crc kubenswrapper[4765]: I0121 13:30:36.105081 4765 scope.go:117] "RemoveContainer" containerID="931ac5bdd0611358ff04b89c6cad124fdeb5af3905540ea18feccd137719879f" Jan 21 13:30:36 crc kubenswrapper[4765]: I0121 13:30:36.242072 4765 generic.go:334] "Generic (PLEG): container finished" podID="23c3f3dc-940c-4df7-a413-752d7f81721e" containerID="9e4ae208495bb14c2aa8d5ca8a34f771530dbbe4aa585cf8a7849b063f9069bc" exitCode=0 Jan 21 13:30:36 crc kubenswrapper[4765]: I0121 13:30:36.242138 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zj5z" event={"ID":"23c3f3dc-940c-4df7-a413-752d7f81721e","Type":"ContainerDied","Data":"9e4ae208495bb14c2aa8d5ca8a34f771530dbbe4aa585cf8a7849b063f9069bc"} Jan 21 13:30:36 crc kubenswrapper[4765]: I0121 13:30:36.242178 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zj5z" event={"ID":"23c3f3dc-940c-4df7-a413-752d7f81721e","Type":"ContainerStarted","Data":"13a497a9ac371ea0cee571f4f990ca32639f9878029628c7fd4ff85bd4ce5466"} Jan 21 13:30:37 crc kubenswrapper[4765]: I0121 13:30:37.272449 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zj5z" event={"ID":"23c3f3dc-940c-4df7-a413-752d7f81721e","Type":"ContainerStarted","Data":"88c0dcbb754de1129a4bf75ede3bf0a50c8fc6a1d0e081b52c2cbba1d683d82e"} Jan 21 13:30:39 crc kubenswrapper[4765]: I0121 13:30:39.299017 4765 generic.go:334] "Generic (PLEG): container finished" podID="23c3f3dc-940c-4df7-a413-752d7f81721e" containerID="88c0dcbb754de1129a4bf75ede3bf0a50c8fc6a1d0e081b52c2cbba1d683d82e" exitCode=0 Jan 21 13:30:39 crc kubenswrapper[4765]: I0121 13:30:39.299130 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zj5z" event={"ID":"23c3f3dc-940c-4df7-a413-752d7f81721e","Type":"ContainerDied","Data":"88c0dcbb754de1129a4bf75ede3bf0a50c8fc6a1d0e081b52c2cbba1d683d82e"} Jan 21 13:30:40 crc 
Jan 21 13:30:40 crc kubenswrapper[4765]: I0121 13:30:40.313200 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zj5z" event={"ID":"23c3f3dc-940c-4df7-a413-752d7f81721e","Type":"ContainerStarted","Data":"77b885c0b061dfb61f69580ff26cf650ce95a5744c06a29001af10a5e1b503fa"}
Jan 21 13:30:40 crc kubenswrapper[4765]: I0121 13:30:40.336134 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7zj5z" podStartSLOduration=2.892068121 podStartE2EDuration="6.336108166s" podCreationTimestamp="2026-01-21 13:30:34 +0000 UTC" firstStartedPulling="2026-01-21 13:30:36.253760846 +0000 UTC m=+1697.271486668" lastFinishedPulling="2026-01-21 13:30:39.697800891 +0000 UTC m=+1700.715526713" observedRunningTime="2026-01-21 13:30:40.32969037 +0000 UTC m=+1701.347416192" watchObservedRunningTime="2026-01-21 13:30:40.336108166 +0000 UTC m=+1701.353833988"
Jan 21 13:30:40 crc kubenswrapper[4765]: I0121 13:30:40.614584 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0"
Jan 21 13:30:40 crc kubenswrapper[4765]: E0121 13:30:40.614906 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:30:45 crc kubenswrapper[4765]: I0121 13:30:45.082283 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7zj5z"
Jan 21 13:30:45 crc kubenswrapper[4765]: I0121 13:30:45.082631 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7zj5z"
Jan 21 13:30:45 crc kubenswrapper[4765]: I0121 13:30:45.142765 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7zj5z"
Jan 21 13:30:45 crc kubenswrapper[4765]: I0121 13:30:45.404218 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7zj5z"
Jan 21 13:30:45 crc kubenswrapper[4765]: I0121 13:30:45.459376 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7zj5z"]
Jan 21 13:30:47 crc kubenswrapper[4765]: I0121 13:30:47.376518 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7zj5z" podUID="23c3f3dc-940c-4df7-a413-752d7f81721e" containerName="registry-server" containerID="cri-o://77b885c0b061dfb61f69580ff26cf650ce95a5744c06a29001af10a5e1b503fa" gracePeriod=2
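The probe ordering above repeats the pattern seen for the previous catalog pod: readiness first reports an empty status and startup reports "unhealthy", then startup flips to "started", and only then does readiness report "ready". Readiness results are not counted until the startup probe has succeeded. A compact sketch of that gating, using toy state rather than the kubelet's prober:

```go
package main

import "fmt"

// probeState mimics the kubelet's probe results for one container.
type probeState struct {
	startupDone bool
	ready       bool
}

// observe applies one round of probe results; readiness only counts
// once the startup probe has succeeded, matching the log ordering.
func (p *probeState) observe(startupOK, readyOK bool) {
	if !p.startupDone {
		if !startupOK {
			fmt.Println(`probe="startup" status="unhealthy"`)
			return
		}
		p.startupDone = true
		fmt.Println(`probe="startup" status="started"`)
	}
	if readyOK && !p.ready {
		p.ready = true
		fmt.Println(`probe="readiness" status="ready"`)
	}
}

func main() {
	var p probeState
	p.observe(false, false) // registry not yet serving
	p.observe(true, true)   // probe succeeds; pod turns Ready
}
```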
Need to start a new one" pod="openshift-marketplace/certified-operators-7zj5z" Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.047323 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23c3f3dc-940c-4df7-a413-752d7f81721e-utilities\") pod \"23c3f3dc-940c-4df7-a413-752d7f81721e\" (UID: \"23c3f3dc-940c-4df7-a413-752d7f81721e\") " Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.047467 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qptw4\" (UniqueName: \"kubernetes.io/projected/23c3f3dc-940c-4df7-a413-752d7f81721e-kube-api-access-qptw4\") pod \"23c3f3dc-940c-4df7-a413-752d7f81721e\" (UID: \"23c3f3dc-940c-4df7-a413-752d7f81721e\") " Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.047689 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23c3f3dc-940c-4df7-a413-752d7f81721e-catalog-content\") pod \"23c3f3dc-940c-4df7-a413-752d7f81721e\" (UID: \"23c3f3dc-940c-4df7-a413-752d7f81721e\") " Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.055948 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23c3f3dc-940c-4df7-a413-752d7f81721e-kube-api-access-qptw4" (OuterVolumeSpecName: "kube-api-access-qptw4") pod "23c3f3dc-940c-4df7-a413-752d7f81721e" (UID: "23c3f3dc-940c-4df7-a413-752d7f81721e"). InnerVolumeSpecName "kube-api-access-qptw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.069741 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23c3f3dc-940c-4df7-a413-752d7f81721e-utilities" (OuterVolumeSpecName: "utilities") pod "23c3f3dc-940c-4df7-a413-752d7f81721e" (UID: "23c3f3dc-940c-4df7-a413-752d7f81721e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.105135 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23c3f3dc-940c-4df7-a413-752d7f81721e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23c3f3dc-940c-4df7-a413-752d7f81721e" (UID: "23c3f3dc-940c-4df7-a413-752d7f81721e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.150713 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23c3f3dc-940c-4df7-a413-752d7f81721e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.150750 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23c3f3dc-940c-4df7-a413-752d7f81721e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.150763 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qptw4\" (UniqueName: \"kubernetes.io/projected/23c3f3dc-940c-4df7-a413-752d7f81721e-kube-api-access-qptw4\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.394366 4765 generic.go:334] "Generic (PLEG): container finished" podID="23c3f3dc-940c-4df7-a413-752d7f81721e" containerID="77b885c0b061dfb61f69580ff26cf650ce95a5744c06a29001af10a5e1b503fa" exitCode=0 Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.394444 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7zj5z" Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.394479 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zj5z" event={"ID":"23c3f3dc-940c-4df7-a413-752d7f81721e","Type":"ContainerDied","Data":"77b885c0b061dfb61f69580ff26cf650ce95a5744c06a29001af10a5e1b503fa"} Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.395564 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zj5z" event={"ID":"23c3f3dc-940c-4df7-a413-752d7f81721e","Type":"ContainerDied","Data":"13a497a9ac371ea0cee571f4f990ca32639f9878029628c7fd4ff85bd4ce5466"} Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.395591 4765 scope.go:117] "RemoveContainer" containerID="77b885c0b061dfb61f69580ff26cf650ce95a5744c06a29001af10a5e1b503fa" Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.434172 4765 scope.go:117] "RemoveContainer" containerID="88c0dcbb754de1129a4bf75ede3bf0a50c8fc6a1d0e081b52c2cbba1d683d82e" Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.461917 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7zj5z"] Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.467674 4765 scope.go:117] "RemoveContainer" containerID="9e4ae208495bb14c2aa8d5ca8a34f771530dbbe4aa585cf8a7849b063f9069bc" Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.470930 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7zj5z"] Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.520037 4765 scope.go:117] "RemoveContainer" containerID="77b885c0b061dfb61f69580ff26cf650ce95a5744c06a29001af10a5e1b503fa" Jan 21 13:30:48 crc kubenswrapper[4765]: E0121 13:30:48.520598 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77b885c0b061dfb61f69580ff26cf650ce95a5744c06a29001af10a5e1b503fa\": container with ID starting with 77b885c0b061dfb61f69580ff26cf650ce95a5744c06a29001af10a5e1b503fa not found: ID does not exist" containerID="77b885c0b061dfb61f69580ff26cf650ce95a5744c06a29001af10a5e1b503fa" Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.520635 
4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77b885c0b061dfb61f69580ff26cf650ce95a5744c06a29001af10a5e1b503fa"} err="failed to get container status \"77b885c0b061dfb61f69580ff26cf650ce95a5744c06a29001af10a5e1b503fa\": rpc error: code = NotFound desc = could not find container \"77b885c0b061dfb61f69580ff26cf650ce95a5744c06a29001af10a5e1b503fa\": container with ID starting with 77b885c0b061dfb61f69580ff26cf650ce95a5744c06a29001af10a5e1b503fa not found: ID does not exist" Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.520660 4765 scope.go:117] "RemoveContainer" containerID="88c0dcbb754de1129a4bf75ede3bf0a50c8fc6a1d0e081b52c2cbba1d683d82e" Jan 21 13:30:48 crc kubenswrapper[4765]: E0121 13:30:48.520903 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88c0dcbb754de1129a4bf75ede3bf0a50c8fc6a1d0e081b52c2cbba1d683d82e\": container with ID starting with 88c0dcbb754de1129a4bf75ede3bf0a50c8fc6a1d0e081b52c2cbba1d683d82e not found: ID does not exist" containerID="88c0dcbb754de1129a4bf75ede3bf0a50c8fc6a1d0e081b52c2cbba1d683d82e" Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.520929 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c0dcbb754de1129a4bf75ede3bf0a50c8fc6a1d0e081b52c2cbba1d683d82e"} err="failed to get container status \"88c0dcbb754de1129a4bf75ede3bf0a50c8fc6a1d0e081b52c2cbba1d683d82e\": rpc error: code = NotFound desc = could not find container \"88c0dcbb754de1129a4bf75ede3bf0a50c8fc6a1d0e081b52c2cbba1d683d82e\": container with ID starting with 88c0dcbb754de1129a4bf75ede3bf0a50c8fc6a1d0e081b52c2cbba1d683d82e not found: ID does not exist" Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.520943 4765 scope.go:117] "RemoveContainer" containerID="9e4ae208495bb14c2aa8d5ca8a34f771530dbbe4aa585cf8a7849b063f9069bc" Jan 21 13:30:48 crc kubenswrapper[4765]: E0121 13:30:48.521133 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e4ae208495bb14c2aa8d5ca8a34f771530dbbe4aa585cf8a7849b063f9069bc\": container with ID starting with 9e4ae208495bb14c2aa8d5ca8a34f771530dbbe4aa585cf8a7849b063f9069bc not found: ID does not exist" containerID="9e4ae208495bb14c2aa8d5ca8a34f771530dbbe4aa585cf8a7849b063f9069bc" Jan 21 13:30:48 crc kubenswrapper[4765]: I0121 13:30:48.521157 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e4ae208495bb14c2aa8d5ca8a34f771530dbbe4aa585cf8a7849b063f9069bc"} err="failed to get container status \"9e4ae208495bb14c2aa8d5ca8a34f771530dbbe4aa585cf8a7849b063f9069bc\": rpc error: code = NotFound desc = could not find container \"9e4ae208495bb14c2aa8d5ca8a34f771530dbbe4aa585cf8a7849b063f9069bc\": container with ID starting with 9e4ae208495bb14c2aa8d5ca8a34f771530dbbe4aa585cf8a7849b063f9069bc not found: ID does not exist" Jan 21 13:30:49 crc kubenswrapper[4765]: I0121 13:30:49.633927 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23c3f3dc-940c-4df7-a413-752d7f81721e" path="/var/lib/kubelet/pods/23c3f3dc-940c-4df7-a413-752d7f81721e/volumes" Jan 21 13:30:52 crc kubenswrapper[4765]: I0121 13:30:52.613685 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:30:52 crc kubenswrapper[4765]: E0121 13:30:52.614382 4765 pod_workers.go:1301] "Error syncing pod, skipping" 
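From here on, machine-config-daemon stays pinned at "back-off 5m0s". The "Error syncing pod, skipping" entries recurring every ten-odd seconds are sync attempts being rejected while the back-off window is open, not additional restarts. Assuming the kubelet's usual crash-loop policy (10s base, doubling per restart, capped at 5m), the window grows like this:

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns the back-off before restart n, assuming the
// usual kubelet policy: 10s base, doubling, capped at 5 minutes.
func crashLoopDelay(restarts int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 0; n <= 6; n++ {
		fmt.Printf("restart %d: wait %v\n", n, crashLoopDelay(n))
	}
	// 10s 20s 40s 1m20s 2m40s 5m 5m: after about five failures the
	// container sits at the "back-off 5m0s" seen in these entries.
}
```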
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:31:03 crc kubenswrapper[4765]: I0121 13:31:03.613333 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:31:03 crc kubenswrapper[4765]: E0121 13:31:03.613988 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:31:18 crc kubenswrapper[4765]: I0121 13:31:18.613611 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:31:18 crc kubenswrapper[4765]: E0121 13:31:18.614543 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:31:33 crc kubenswrapper[4765]: I0121 13:31:33.615145 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:31:33 crc kubenswrapper[4765]: E0121 13:31:33.616023 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:31:44 crc kubenswrapper[4765]: I0121 13:31:44.062333 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-252fs"] Jan 21 13:31:44 crc kubenswrapper[4765]: I0121 13:31:44.073649 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-252fs"] Jan 21 13:31:45 crc kubenswrapper[4765]: I0121 13:31:45.049148 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-lnvld"] Jan 21 13:31:45 crc kubenswrapper[4765]: I0121 13:31:45.061783 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8873-account-create-update-pdsft"] Jan 21 13:31:45 crc kubenswrapper[4765]: I0121 13:31:45.097492 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-lnvld"] Jan 21 13:31:45 crc kubenswrapper[4765]: I0121 13:31:45.100490 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8873-account-create-update-pdsft"] Jan 21 13:31:45 crc kubenswrapper[4765]: I0121 13:31:45.110762 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-c267-account-create-update-ptrxt"] Jan 21 13:31:45 crc kubenswrapper[4765]: I0121 13:31:45.119782 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b8a1-account-create-update-4xk2c"] Jan 21 13:31:45 crc kubenswrapper[4765]: I0121 13:31:45.129076 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-c267-account-create-update-ptrxt"] Jan 21 13:31:45 crc kubenswrapper[4765]: I0121 13:31:45.137554 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-vm5v2"] Jan 21 13:31:45 crc kubenswrapper[4765]: I0121 13:31:45.150287 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b8a1-account-create-update-4xk2c"] Jan 21 13:31:45 crc kubenswrapper[4765]: I0121 13:31:45.160078 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-vm5v2"] Jan 21 13:31:45 crc kubenswrapper[4765]: I0121 13:31:45.613691 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:31:45 crc kubenswrapper[4765]: E0121 13:31:45.614088 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:31:45 crc kubenswrapper[4765]: I0121 13:31:45.624329 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="246155f7-f2f2-4bb9-a1c3-640933aa45c6" path="/var/lib/kubelet/pods/246155f7-f2f2-4bb9-a1c3-640933aa45c6/volumes" Jan 21 13:31:45 crc kubenswrapper[4765]: I0121 13:31:45.625568 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69eb1c3c-fc0b-48c1-8151-052e16dbf92e" path="/var/lib/kubelet/pods/69eb1c3c-fc0b-48c1-8151-052e16dbf92e/volumes" Jan 21 13:31:45 crc kubenswrapper[4765]: I0121 13:31:45.626485 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc79a4c-772c-44ae-9b50-a3893d199b48" path="/var/lib/kubelet/pods/6cc79a4c-772c-44ae-9b50-a3893d199b48/volumes" Jan 21 13:31:45 crc kubenswrapper[4765]: I0121 13:31:45.627363 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88f4fc76-7416-4ac2-92b3-ef3649bbd6b1" path="/var/lib/kubelet/pods/88f4fc76-7416-4ac2-92b3-ef3649bbd6b1/volumes" Jan 21 13:31:45 crc kubenswrapper[4765]: I0121 13:31:45.628810 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c79ed78f-7aba-4980-b043-0850084ef3e8" path="/var/lib/kubelet/pods/c79ed78f-7aba-4980-b043-0850084ef3e8/volumes" Jan 21 13:31:45 crc kubenswrapper[4765]: I0121 13:31:45.629889 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0d0673a-0e41-46e0-ab22-19b6e9cb522a" path="/var/lib/kubelet/pods/e0d0673a-0e41-46e0-ab22-19b6e9cb522a/volumes" Jan 21 13:31:56 crc kubenswrapper[4765]: I0121 13:31:56.614650 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:31:56 crc kubenswrapper[4765]: E0121 13:31:56.615763 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 21 13:31:56 crc kubenswrapper[4765]: I0121 13:31:56.614650 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0"
Jan 21 13:31:56 crc kubenswrapper[4765]: E0121 13:31:56.615763 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:32:04 crc kubenswrapper[4765]: I0121 13:32:04.040247 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9nwh5"]
Jan 21 13:32:04 crc kubenswrapper[4765]: I0121 13:32:04.053251 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9nwh5"]
Jan 21 13:32:05 crc kubenswrapper[4765]: I0121 13:32:05.626382 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89ea0c05-6e45-48ca-a687-79c9e4cbc084" path="/var/lib/kubelet/pods/89ea0c05-6e45-48ca-a687-79c9e4cbc084/volumes"
Jan 21 13:32:09 crc kubenswrapper[4765]: I0121 13:32:09.621655 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0"
Jan 21 13:32:09 crc kubenswrapper[4765]: E0121 13:32:09.622411 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:32:15 crc kubenswrapper[4765]: I0121 13:32:15.052187 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-6h2b4"]
Jan 21 13:32:15 crc kubenswrapper[4765]: I0121 13:32:15.065257 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-6h2b4"]
Jan 21 13:32:15 crc kubenswrapper[4765]: I0121 13:32:15.626982 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a300493b-663b-4b7e-b2b7-890abcca42dd" path="/var/lib/kubelet/pods/a300493b-663b-4b7e-b2b7-890abcca42dd/volumes"
Jan 21 13:32:20 crc kubenswrapper[4765]: I0121 13:32:20.613953 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0"
Jan 21 13:32:20 crc kubenswrapper[4765]: E0121 13:32:20.615852 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:32:23 crc kubenswrapper[4765]: I0121 13:32:23.036873 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-7kb2r"]
Jan 21 13:32:23 crc kubenswrapper[4765]: I0121 13:32:23.047478 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-7kb2r"]
Jan 21 13:32:23 crc kubenswrapper[4765]: I0121 13:32:23.624848 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66ba0af3-6159-4e72-ab1d-f32955d76bfa" path="/var/lib/kubelet/pods/66ba0af3-6159-4e72-ab1d-f32955d76bfa/volumes"
Jan 21 13:32:27 crc kubenswrapper[4765]: I0121 13:32:27.045099 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-d337-account-create-update-t6njh"]
Jan 21 13:32:27 crc kubenswrapper[4765]: I0121 13:32:27.054366 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-a27a-account-create-update-9g4bh"]
Jan 21 13:32:27 crc kubenswrapper[4765]: I0121 13:32:27.065973 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9f6c-account-create-update-g645s"]
Jan 21 13:32:27 crc kubenswrapper[4765]: I0121 13:32:27.076658 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-nvsqn"]
Jan 21 13:32:27 crc kubenswrapper[4765]: I0121 13:32:27.085035 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-d337-account-create-update-t6njh"]
Jan 21 13:32:27 crc kubenswrapper[4765]: I0121 13:32:27.096083 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-b6tzk"]
Jan 21 13:32:27 crc kubenswrapper[4765]: I0121 13:32:27.105925 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-a27a-account-create-update-9g4bh"]
Jan 21 13:32:27 crc kubenswrapper[4765]: I0121 13:32:27.113894 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9f6c-account-create-update-g645s"]
Jan 21 13:32:27 crc kubenswrapper[4765]: I0121 13:32:27.121988 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-nvsqn"]
Jan 21 13:32:27 crc kubenswrapper[4765]: I0121 13:32:27.129791 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-b6tzk"]
Jan 21 13:32:27 crc kubenswrapper[4765]: I0121 13:32:27.626034 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0107c27f-cb84-474c-8146-6fa6e03e0a8f" path="/var/lib/kubelet/pods/0107c27f-cb84-474c-8146-6fa6e03e0a8f/volumes"
Jan 21 13:32:27 crc kubenswrapper[4765]: I0121 13:32:27.627397 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25d4e1e5-919d-44d0-9c5d-e238325d9c00" path="/var/lib/kubelet/pods/25d4e1e5-919d-44d0-9c5d-e238325d9c00/volumes"
Jan 21 13:32:27 crc kubenswrapper[4765]: I0121 13:32:27.628182 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fae8415-8196-4888-90aa-8f40261530e4" path="/var/lib/kubelet/pods/8fae8415-8196-4888-90aa-8f40261530e4/volumes"
Jan 21 13:32:27 crc kubenswrapper[4765]: I0121 13:32:27.628831 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed971171-a23f-4ef1-9eec-28d47864b08f" path="/var/lib/kubelet/pods/ed971171-a23f-4ef1-9eec-28d47864b08f/volumes"
Jan 21 13:32:27 crc kubenswrapper[4765]: I0121 13:32:27.630356 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f76ba990-ea55-4459-8486-e413e80ba089" path="/var/lib/kubelet/pods/f76ba990-ea55-4459-8486-e413e80ba089/volumes"
Jan 21 13:32:32 crc kubenswrapper[4765]: I0121 13:32:32.036413 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-tc4tf"]
Jan 21 13:32:32 crc kubenswrapper[4765]: I0121 13:32:32.051191 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-tc4tf"]
pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:32:33 crc kubenswrapper[4765]: I0121 13:32:33.628901 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cd0d3a4-9cce-49d9-9497-3398221354b0" path="/var/lib/kubelet/pods/4cd0d3a4-9cce-49d9-9497-3398221354b0/volumes" Jan 21 13:32:36 crc kubenswrapper[4765]: I0121 13:32:36.207613 4765 scope.go:117] "RemoveContainer" containerID="571898e2a863f5fb60fe91dbb6e313e258b379dffbc206b9b45eb650ae5be66e" Jan 21 13:32:36 crc kubenswrapper[4765]: I0121 13:32:36.229927 4765 scope.go:117] "RemoveContainer" containerID="7a6925cbacb6fa17e2dbdee171a4352fdc1e25bf4c1321794812ef5c210b2df4" Jan 21 13:32:36 crc kubenswrapper[4765]: I0121 13:32:36.285801 4765 scope.go:117] "RemoveContainer" containerID="e950c9cad8c1d622fe1bc87d455211fa3cc6a3110be9a529d75a92d672410921" Jan 21 13:32:36 crc kubenswrapper[4765]: I0121 13:32:36.326255 4765 scope.go:117] "RemoveContainer" containerID="efc2d686337a7743a9ab89363ac72d84fed522dc68b0602424269139851017fd" Jan 21 13:32:36 crc kubenswrapper[4765]: I0121 13:32:36.363072 4765 scope.go:117] "RemoveContainer" containerID="8e453dfa63abaf10e3e7ebb054d4727ffa0ac47c630488538022f1245c95da41" Jan 21 13:32:36 crc kubenswrapper[4765]: I0121 13:32:36.415435 4765 scope.go:117] "RemoveContainer" containerID="948ad17a8afdf2aa472006584031b56685d245f349f975028be9b3877f15f741" Jan 21 13:32:36 crc kubenswrapper[4765]: I0121 13:32:36.454229 4765 scope.go:117] "RemoveContainer" containerID="a399100f6c3d42ed941735929f76b7de5dfd450fd457fab3c1d245509bdaa616" Jan 21 13:32:36 crc kubenswrapper[4765]: I0121 13:32:36.490396 4765 scope.go:117] "RemoveContainer" containerID="9268a910527aca58f6d48420d5c50027c5628c24191123f9a85dcddd4ba58aa3" Jan 21 13:32:36 crc kubenswrapper[4765]: I0121 13:32:36.520693 4765 scope.go:117] "RemoveContainer" containerID="230fbe4ed5fb02480a0bd6797e535ddc1b0206bb7db062fa88bae0232e4c83a8" Jan 21 13:32:36 crc kubenswrapper[4765]: I0121 13:32:36.538552 4765 scope.go:117] "RemoveContainer" containerID="7fc7c3dea81a557f7ceb2189e386c92f82e9daee36dfda8a843b15af5104eb08" Jan 21 13:32:36 crc kubenswrapper[4765]: I0121 13:32:36.556598 4765 scope.go:117] "RemoveContainer" containerID="4c3791c73117b27994075d48e1bf97e583e9c9fc05b3e7943415fc23304ff092" Jan 21 13:32:36 crc kubenswrapper[4765]: I0121 13:32:36.577718 4765 scope.go:117] "RemoveContainer" containerID="d53e75ee06d2c86845426580212e68bdccaff55b84b5f8d258b6fe54e7debb03" Jan 21 13:32:36 crc kubenswrapper[4765]: I0121 13:32:36.601636 4765 scope.go:117] "RemoveContainer" containerID="9fa7c6b73f21e838816589ad4c9d85a7805eea241a59ca34be4aa103ee7feafd" Jan 21 13:32:36 crc kubenswrapper[4765]: I0121 13:32:36.642409 4765 scope.go:117] "RemoveContainer" containerID="06935b48d4bf43cb03e5aed3cfe863ead12d20d098c290ae9f2f4f46a891bd1a" Jan 21 13:32:36 crc kubenswrapper[4765]: I0121 13:32:36.661905 4765 scope.go:117] "RemoveContainer" containerID="2e1bf7f9019dfd452e2d23adf2e26d23645f273a7c75cfd3880b3711ea351390" Jan 21 13:32:36 crc kubenswrapper[4765]: I0121 13:32:36.685451 4765 scope.go:117] "RemoveContainer" containerID="eac1f8c5bce8f14d00e35df158711d3bff75eaee987c811b4b57febe1072b525" Jan 21 13:32:36 crc kubenswrapper[4765]: I0121 13:32:36.722147 4765 scope.go:117] "RemoveContainer" containerID="c880926176ca0ff1f48c4214168eb2527af70b4ab611a801029891161d140b6c" Jan 21 13:32:36 crc kubenswrapper[4765]: I0121 13:32:36.752323 4765 scope.go:117] "RemoveContainer" 
containerID="469412805a800458a07d2ecbccd810dd5fb80c2c4b175b0e1b61b6dd7653dc2b" Jan 21 13:32:36 crc kubenswrapper[4765]: I0121 13:32:36.781094 4765 scope.go:117] "RemoveContainer" containerID="5b90fa3b9fae42d7931aa71bf148ac1e58295958d5083b20ad8e9c4b80d234e7" Jan 21 13:32:46 crc kubenswrapper[4765]: I0121 13:32:46.613353 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:32:46 crc kubenswrapper[4765]: E0121 13:32:46.614319 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:33:00 crc kubenswrapper[4765]: I0121 13:33:00.613754 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:33:00 crc kubenswrapper[4765]: E0121 13:33:00.614583 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:33:12 crc kubenswrapper[4765]: I0121 13:33:12.614686 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:33:12 crc kubenswrapper[4765]: E0121 13:33:12.616546 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:33:13 crc kubenswrapper[4765]: I0121 13:33:13.048609 4765 generic.go:334] "Generic (PLEG): container finished" podID="244e5c68-a93a-44e7-a8fd-d4368ee754bd" containerID="0cdcaa0739f54246c903bb553d1b62c80e069d520f6b26851d5996567aa1371a" exitCode=0 Jan 21 13:33:13 crc kubenswrapper[4765]: I0121 13:33:13.048648 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" event={"ID":"244e5c68-a93a-44e7-a8fd-d4368ee754bd","Type":"ContainerDied","Data":"0cdcaa0739f54246c903bb553d1b62c80e069d520f6b26851d5996567aa1371a"} Jan 21 13:33:14 crc kubenswrapper[4765]: I0121 13:33:14.660832 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" Jan 21 13:33:14 crc kubenswrapper[4765]: I0121 13:33:14.800014 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/244e5c68-a93a-44e7-a8fd-d4368ee754bd-bootstrap-combined-ca-bundle\") pod \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\" (UID: \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\") " Jan 21 13:33:14 crc kubenswrapper[4765]: I0121 13:33:14.800129 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/244e5c68-a93a-44e7-a8fd-d4368ee754bd-inventory\") pod \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\" (UID: \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\") " Jan 21 13:33:14 crc kubenswrapper[4765]: I0121 13:33:14.800256 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/244e5c68-a93a-44e7-a8fd-d4368ee754bd-ssh-key-openstack-edpm-ipam\") pod \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\" (UID: \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\") " Jan 21 13:33:14 crc kubenswrapper[4765]: I0121 13:33:14.800324 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spxbn\" (UniqueName: \"kubernetes.io/projected/244e5c68-a93a-44e7-a8fd-d4368ee754bd-kube-api-access-spxbn\") pod \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\" (UID: \"244e5c68-a93a-44e7-a8fd-d4368ee754bd\") " Jan 21 13:33:14 crc kubenswrapper[4765]: I0121 13:33:14.808534 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/244e5c68-a93a-44e7-a8fd-d4368ee754bd-kube-api-access-spxbn" (OuterVolumeSpecName: "kube-api-access-spxbn") pod "244e5c68-a93a-44e7-a8fd-d4368ee754bd" (UID: "244e5c68-a93a-44e7-a8fd-d4368ee754bd"). InnerVolumeSpecName "kube-api-access-spxbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:33:14 crc kubenswrapper[4765]: I0121 13:33:14.811802 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/244e5c68-a93a-44e7-a8fd-d4368ee754bd-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "244e5c68-a93a-44e7-a8fd-d4368ee754bd" (UID: "244e5c68-a93a-44e7-a8fd-d4368ee754bd"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:14 crc kubenswrapper[4765]: I0121 13:33:14.829382 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/244e5c68-a93a-44e7-a8fd-d4368ee754bd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "244e5c68-a93a-44e7-a8fd-d4368ee754bd" (UID: "244e5c68-a93a-44e7-a8fd-d4368ee754bd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:14 crc kubenswrapper[4765]: I0121 13:33:14.833007 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/244e5c68-a93a-44e7-a8fd-d4368ee754bd-inventory" (OuterVolumeSpecName: "inventory") pod "244e5c68-a93a-44e7-a8fd-d4368ee754bd" (UID: "244e5c68-a93a-44e7-a8fd-d4368ee754bd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:33:14 crc kubenswrapper[4765]: I0121 13:33:14.902024 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spxbn\" (UniqueName: \"kubernetes.io/projected/244e5c68-a93a-44e7-a8fd-d4368ee754bd-kube-api-access-spxbn\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:14 crc kubenswrapper[4765]: I0121 13:33:14.902166 4765 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/244e5c68-a93a-44e7-a8fd-d4368ee754bd-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:14 crc kubenswrapper[4765]: I0121 13:33:14.902246 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/244e5c68-a93a-44e7-a8fd-d4368ee754bd-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:14 crc kubenswrapper[4765]: I0121 13:33:14.902327 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/244e5c68-a93a-44e7-a8fd-d4368ee754bd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.036309 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-g87wz"] Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.043818 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-g87wz"] Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.068251 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" event={"ID":"244e5c68-a93a-44e7-a8fd-d4368ee754bd","Type":"ContainerDied","Data":"5b031172f3a56fd74a853282921a5c6311b7d801095ea2bda136ccf61f04c39a"} Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.068478 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b031172f3a56fd74a853282921a5c6311b7d801095ea2bda136ccf61f04c39a" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.068353 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.170313 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6"] Jan 21 13:33:15 crc kubenswrapper[4765]: E0121 13:33:15.171134 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c3f3dc-940c-4df7-a413-752d7f81721e" containerName="extract-content" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.171247 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c3f3dc-940c-4df7-a413-752d7f81721e" containerName="extract-content" Jan 21 13:33:15 crc kubenswrapper[4765]: E0121 13:33:15.171387 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c3f3dc-940c-4df7-a413-752d7f81721e" containerName="registry-server" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.171476 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c3f3dc-940c-4df7-a413-752d7f81721e" containerName="registry-server" Jan 21 13:33:15 crc kubenswrapper[4765]: E0121 13:33:15.171571 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="244e5c68-a93a-44e7-a8fd-d4368ee754bd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.171660 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="244e5c68-a93a-44e7-a8fd-d4368ee754bd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 13:33:15 crc kubenswrapper[4765]: E0121 13:33:15.171750 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c3f3dc-940c-4df7-a413-752d7f81721e" containerName="extract-utilities" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.171839 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c3f3dc-940c-4df7-a413-752d7f81721e" containerName="extract-utilities" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.172269 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="23c3f3dc-940c-4df7-a413-752d7f81721e" containerName="registry-server" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.172380 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="244e5c68-a93a-44e7-a8fd-d4368ee754bd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.173500 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.176037 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.176753 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.177078 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x88cm" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.177837 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.197146 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6"] Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.310355 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4frh\" (UniqueName: \"kubernetes.io/projected/1c7356a7-bab7-4123-9f98-a484d751e8e7-kube-api-access-c4frh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6\" (UID: \"1c7356a7-bab7-4123-9f98-a484d751e8e7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.310685 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c7356a7-bab7-4123-9f98-a484d751e8e7-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6\" (UID: \"1c7356a7-bab7-4123-9f98-a484d751e8e7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.310808 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c7356a7-bab7-4123-9f98-a484d751e8e7-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6\" (UID: \"1c7356a7-bab7-4123-9f98-a484d751e8e7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.412332 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c7356a7-bab7-4123-9f98-a484d751e8e7-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6\" (UID: \"1c7356a7-bab7-4123-9f98-a484d751e8e7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.412398 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c7356a7-bab7-4123-9f98-a484d751e8e7-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6\" (UID: \"1c7356a7-bab7-4123-9f98-a484d751e8e7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.412481 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4frh\" (UniqueName: 
\"kubernetes.io/projected/1c7356a7-bab7-4123-9f98-a484d751e8e7-kube-api-access-c4frh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6\" (UID: \"1c7356a7-bab7-4123-9f98-a484d751e8e7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.417311 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c7356a7-bab7-4123-9f98-a484d751e8e7-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6\" (UID: \"1c7356a7-bab7-4123-9f98-a484d751e8e7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.425645 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c7356a7-bab7-4123-9f98-a484d751e8e7-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6\" (UID: \"1c7356a7-bab7-4123-9f98-a484d751e8e7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.428483 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4frh\" (UniqueName: \"kubernetes.io/projected/1c7356a7-bab7-4123-9f98-a484d751e8e7-kube-api-access-c4frh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6\" (UID: \"1c7356a7-bab7-4123-9f98-a484d751e8e7\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.501437 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6" Jan 21 13:33:15 crc kubenswrapper[4765]: I0121 13:33:15.628007 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5" path="/var/lib/kubelet/pods/2f11ea5c-ca3f-4188-85f5-ba8994e1a7a5/volumes" Jan 21 13:33:16 crc kubenswrapper[4765]: I0121 13:33:16.068041 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6"] Jan 21 13:33:17 crc kubenswrapper[4765]: I0121 13:33:17.086176 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6" event={"ID":"1c7356a7-bab7-4123-9f98-a484d751e8e7","Type":"ContainerStarted","Data":"bbce5d868a5451c563aed757e754368db4e6d6fd83082e15c7056957f814d089"} Jan 21 13:33:17 crc kubenswrapper[4765]: I0121 13:33:17.087533 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6" event={"ID":"1c7356a7-bab7-4123-9f98-a484d751e8e7","Type":"ContainerStarted","Data":"9ac53219ab617f3617f8a51ea985c9c9ad0cd8b9b5b8eb5f33cb63dd51781150"} Jan 21 13:33:17 crc kubenswrapper[4765]: I0121 13:33:17.111203 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6" podStartSLOduration=1.698020481 podStartE2EDuration="2.111181905s" podCreationTimestamp="2026-01-21 13:33:15 +0000 UTC" firstStartedPulling="2026-01-21 13:33:16.087422369 +0000 UTC m=+1857.105148191" lastFinishedPulling="2026-01-21 13:33:16.500583783 +0000 UTC m=+1857.518309615" observedRunningTime="2026-01-21 13:33:17.106367936 +0000 UTC m=+1858.124093768" 
watchObservedRunningTime="2026-01-21 13:33:17.111181905 +0000 UTC m=+1858.128907727" Jan 21 13:33:23 crc kubenswrapper[4765]: I0121 13:33:23.614096 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:33:23 crc kubenswrapper[4765]: E0121 13:33:23.614883 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:33:33 crc kubenswrapper[4765]: I0121 13:33:33.071406 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-64bhp"] Jan 21 13:33:33 crc kubenswrapper[4765]: I0121 13:33:33.085516 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vftqw"] Jan 21 13:33:33 crc kubenswrapper[4765]: I0121 13:33:33.093410 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-64bhp"] Jan 21 13:33:33 crc kubenswrapper[4765]: I0121 13:33:33.102410 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vftqw"] Jan 21 13:33:33 crc kubenswrapper[4765]: I0121 13:33:33.110244 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-6m7js"] Jan 21 13:33:33 crc kubenswrapper[4765]: I0121 13:33:33.116691 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-6m7js"] Jan 21 13:33:33 crc kubenswrapper[4765]: I0121 13:33:33.624450 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e10ec1e-60c7-497a-bd8f-710c01db5b28" path="/var/lib/kubelet/pods/5e10ec1e-60c7-497a-bd8f-710c01db5b28/volumes" Jan 21 13:33:33 crc kubenswrapper[4765]: I0121 13:33:33.625907 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92340e7a-b249-4701-8527-eacaf9ba1fd7" path="/var/lib/kubelet/pods/92340e7a-b249-4701-8527-eacaf9ba1fd7/volumes" Jan 21 13:33:33 crc kubenswrapper[4765]: I0121 13:33:33.626896 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7141df0-548e-4699-8620-4d85ba1b1218" path="/var/lib/kubelet/pods/e7141df0-548e-4699-8620-4d85ba1b1218/volumes" Jan 21 13:33:37 crc kubenswrapper[4765]: I0121 13:33:37.065369 4765 scope.go:117] "RemoveContainer" containerID="47a49ca207f9a4bf97ff3c48d8898c3431b17fb94402ff80fbbb1c0681a6404a" Jan 21 13:33:37 crc kubenswrapper[4765]: I0121 13:33:37.094533 4765 scope.go:117] "RemoveContainer" containerID="236b62ff57085023bfd7faa978709fe7c1cf5b565a052ca93f0b06f9405fda16" Jan 21 13:33:37 crc kubenswrapper[4765]: I0121 13:33:37.147907 4765 scope.go:117] "RemoveContainer" containerID="2754f20aa9da36d9d9ca96314b11447ddef71b04750531d794dad3815e7e58e3" Jan 21 13:33:37 crc kubenswrapper[4765]: I0121 13:33:37.193653 4765 scope.go:117] "RemoveContainer" containerID="55a7211486ad090d246dd116d0b0b13604208a9504841e230e9d04aabbf7b482" Jan 21 13:33:38 crc kubenswrapper[4765]: I0121 13:33:38.614193 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:33:38 crc kubenswrapper[4765]: E0121 13:33:38.616659 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:33:49 crc kubenswrapper[4765]: I0121 13:33:49.613874 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:33:49 crc kubenswrapper[4765]: E0121 13:33:49.614708 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:33:56 crc kubenswrapper[4765]: I0121 13:33:56.048868 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-v4h97"] Jan 21 13:33:56 crc kubenswrapper[4765]: I0121 13:33:56.058994 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-v4h97"] Jan 21 13:33:57 crc kubenswrapper[4765]: I0121 13:33:57.625462 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f0ee201-f570-4414-9feb-616192dfca3b" path="/var/lib/kubelet/pods/3f0ee201-f570-4414-9feb-616192dfca3b/volumes" Jan 21 13:34:04 crc kubenswrapper[4765]: I0121 13:34:04.614149 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:34:04 crc kubenswrapper[4765]: E0121 13:34:04.615063 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:34:18 crc kubenswrapper[4765]: I0121 13:34:18.613916 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:34:18 crc kubenswrapper[4765]: E0121 13:34:18.622026 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:34:31 crc kubenswrapper[4765]: I0121 13:34:31.614374 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:34:31 crc kubenswrapper[4765]: E0121 13:34:31.615165 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" 
Jan 21 13:34:36 crc kubenswrapper[4765]: I0121 13:34:36.040853 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-dgs4w"] Jan 21 13:34:36 crc kubenswrapper[4765]: I0121 13:34:36.048018 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-5135-account-create-update-drt8l"] Jan 21 13:34:36 crc kubenswrapper[4765]: I0121 13:34:36.057114 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-5135-account-create-update-drt8l"] Jan 21 13:34:36 crc kubenswrapper[4765]: I0121 13:34:36.063834 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dgs4w"] Jan 21 13:34:37 crc kubenswrapper[4765]: I0121 13:34:37.036991 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6aaf-account-create-update-xmbm9"] Jan 21 13:34:37 crc kubenswrapper[4765]: I0121 13:34:37.048564 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-v7jnv"] Jan 21 13:34:37 crc kubenswrapper[4765]: I0121 13:34:37.057725 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-lq24p"] Jan 21 13:34:37 crc kubenswrapper[4765]: I0121 13:34:37.065010 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-v7jnv"] Jan 21 13:34:37 crc kubenswrapper[4765]: I0121 13:34:37.073676 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-6aaf-account-create-update-xmbm9"] Jan 21 13:34:37 crc kubenswrapper[4765]: I0121 13:34:37.082311 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-lq24p"] Jan 21 13:34:37 crc kubenswrapper[4765]: I0121 13:34:37.089946 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-444e-account-create-update-4qk7x"] Jan 21 13:34:37 crc kubenswrapper[4765]: I0121 13:34:37.096692 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-444e-account-create-update-4qk7x"] Jan 21 13:34:37 crc kubenswrapper[4765]: I0121 13:34:37.313390 4765 scope.go:117] "RemoveContainer" containerID="e65d21496902b707aaddc6034d3b49f4e82bf1523f99b3f1e8975ce3badc470a" Jan 21 13:34:37 crc kubenswrapper[4765]: I0121 13:34:37.626273 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2291fe12-d21d-4050-9296-40984ce36fd3" path="/var/lib/kubelet/pods/2291fe12-d21d-4050-9296-40984ce36fd3/volumes" Jan 21 13:34:37 crc kubenswrapper[4765]: I0121 13:34:37.627362 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="299980e5-044a-4ee7-a28d-b11babd43597" path="/var/lib/kubelet/pods/299980e5-044a-4ee7-a28d-b11babd43597/volumes" Jan 21 13:34:37 crc kubenswrapper[4765]: I0121 13:34:37.628137 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62802099-90dc-4ca1-b480-5dd33b03a17d" path="/var/lib/kubelet/pods/62802099-90dc-4ca1-b480-5dd33b03a17d/volumes" Jan 21 13:34:37 crc kubenswrapper[4765]: I0121 13:34:37.629123 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d203539-6d7d-4db6-803f-c1954d20a55f" path="/var/lib/kubelet/pods/7d203539-6d7d-4db6-803f-c1954d20a55f/volumes" Jan 21 13:34:37 crc kubenswrapper[4765]: I0121 13:34:37.630564 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1d066a7-4634-4680-84b3-f5bb40d939f3" path="/var/lib/kubelet/pods/d1d066a7-4634-4680-84b3-f5bb40d939f3/volumes" Jan 21 13:34:37 crc kubenswrapper[4765]: I0121 13:34:37.631328 4765 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c" path="/var/lib/kubelet/pods/e7dfe86c-0cf2-4d25-a9ad-0bff4543ec5c/volumes" Jan 21 13:34:42 crc kubenswrapper[4765]: I0121 13:34:42.614096 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:34:42 crc kubenswrapper[4765]: E0121 13:34:42.615833 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:34:55 crc kubenswrapper[4765]: I0121 13:34:55.613686 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0" Jan 21 13:34:56 crc kubenswrapper[4765]: I0121 13:34:56.033462 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"8be9c6b30eac9194fe69597ddad7819ab0f25189067a0149bf0d2a68338af1f4"} Jan 21 13:35:20 crc kubenswrapper[4765]: I0121 13:35:20.043850 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2ks5c"] Jan 21 13:35:20 crc kubenswrapper[4765]: I0121 13:35:20.050584 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2ks5c"] Jan 21 13:35:21 crc kubenswrapper[4765]: I0121 13:35:21.626619 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cbd47b6-cd86-4ff3-a374-4863622fefad" path="/var/lib/kubelet/pods/3cbd47b6-cd86-4ff3-a374-4863622fefad/volumes" Jan 21 13:35:34 crc kubenswrapper[4765]: I0121 13:35:34.423224 4765 generic.go:334] "Generic (PLEG): container finished" podID="1c7356a7-bab7-4123-9f98-a484d751e8e7" containerID="bbce5d868a5451c563aed757e754368db4e6d6fd83082e15c7056957f814d089" exitCode=0 Jan 21 13:35:34 crc kubenswrapper[4765]: I0121 13:35:34.423262 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6" event={"ID":"1c7356a7-bab7-4123-9f98-a484d751e8e7","Type":"ContainerDied","Data":"bbce5d868a5451c563aed757e754368db4e6d6fd83082e15c7056957f814d089"} Jan 21 13:35:35 crc kubenswrapper[4765]: I0121 13:35:35.829036 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6" Jan 21 13:35:35 crc kubenswrapper[4765]: I0121 13:35:35.928152 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c7356a7-bab7-4123-9f98-a484d751e8e7-ssh-key-openstack-edpm-ipam\") pod \"1c7356a7-bab7-4123-9f98-a484d751e8e7\" (UID: \"1c7356a7-bab7-4123-9f98-a484d751e8e7\") " Jan 21 13:35:35 crc kubenswrapper[4765]: I0121 13:35:35.928309 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c7356a7-bab7-4123-9f98-a484d751e8e7-inventory\") pod \"1c7356a7-bab7-4123-9f98-a484d751e8e7\" (UID: \"1c7356a7-bab7-4123-9f98-a484d751e8e7\") " Jan 21 13:35:35 crc kubenswrapper[4765]: I0121 13:35:35.928365 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4frh\" (UniqueName: \"kubernetes.io/projected/1c7356a7-bab7-4123-9f98-a484d751e8e7-kube-api-access-c4frh\") pod \"1c7356a7-bab7-4123-9f98-a484d751e8e7\" (UID: \"1c7356a7-bab7-4123-9f98-a484d751e8e7\") " Jan 21 13:35:35 crc kubenswrapper[4765]: I0121 13:35:35.936829 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c7356a7-bab7-4123-9f98-a484d751e8e7-kube-api-access-c4frh" (OuterVolumeSpecName: "kube-api-access-c4frh") pod "1c7356a7-bab7-4123-9f98-a484d751e8e7" (UID: "1c7356a7-bab7-4123-9f98-a484d751e8e7"). InnerVolumeSpecName "kube-api-access-c4frh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:35:35 crc kubenswrapper[4765]: I0121 13:35:35.985134 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7356a7-bab7-4123-9f98-a484d751e8e7-inventory" (OuterVolumeSpecName: "inventory") pod "1c7356a7-bab7-4123-9f98-a484d751e8e7" (UID: "1c7356a7-bab7-4123-9f98-a484d751e8e7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:35:35 crc kubenswrapper[4765]: I0121 13:35:35.990782 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7356a7-bab7-4123-9f98-a484d751e8e7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1c7356a7-bab7-4123-9f98-a484d751e8e7" (UID: "1c7356a7-bab7-4123-9f98-a484d751e8e7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.031293 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c7356a7-bab7-4123-9f98-a484d751e8e7-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.031325 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4frh\" (UniqueName: \"kubernetes.io/projected/1c7356a7-bab7-4123-9f98-a484d751e8e7-kube-api-access-c4frh\") on node \"crc\" DevicePath \"\"" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.031336 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c7356a7-bab7-4123-9f98-a484d751e8e7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.444445 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6" event={"ID":"1c7356a7-bab7-4123-9f98-a484d751e8e7","Type":"ContainerDied","Data":"9ac53219ab617f3617f8a51ea985c9c9ad0cd8b9b5b8eb5f33cb63dd51781150"} Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.444521 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ac53219ab617f3617f8a51ea985c9c9ad0cd8b9b5b8eb5f33cb63dd51781150" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.444559 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.545285 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh"] Jan 21 13:35:36 crc kubenswrapper[4765]: E0121 13:35:36.545906 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c7356a7-bab7-4123-9f98-a484d751e8e7" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.546062 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c7356a7-bab7-4123-9f98-a484d751e8e7" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.546433 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c7356a7-bab7-4123-9f98-a484d751e8e7" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.547363 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.563666 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh"] Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.563824 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.563850 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x88cm" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.563890 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.565185 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.643892 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6275ee-1fe3-407a-b438-a189ac6b3241-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s77bh\" (UID: \"9a6275ee-1fe3-407a-b438-a189ac6b3241\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.644245 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2h44\" (UniqueName: \"kubernetes.io/projected/9a6275ee-1fe3-407a-b438-a189ac6b3241-kube-api-access-t2h44\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s77bh\" (UID: \"9a6275ee-1fe3-407a-b438-a189ac6b3241\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.644563 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a6275ee-1fe3-407a-b438-a189ac6b3241-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s77bh\" (UID: \"9a6275ee-1fe3-407a-b438-a189ac6b3241\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.746854 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6275ee-1fe3-407a-b438-a189ac6b3241-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s77bh\" (UID: \"9a6275ee-1fe3-407a-b438-a189ac6b3241\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.747186 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2h44\" (UniqueName: \"kubernetes.io/projected/9a6275ee-1fe3-407a-b438-a189ac6b3241-kube-api-access-t2h44\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s77bh\" (UID: \"9a6275ee-1fe3-407a-b438-a189ac6b3241\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.747572 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9a6275ee-1fe3-407a-b438-a189ac6b3241-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s77bh\" (UID: \"9a6275ee-1fe3-407a-b438-a189ac6b3241\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.752878 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6275ee-1fe3-407a-b438-a189ac6b3241-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s77bh\" (UID: \"9a6275ee-1fe3-407a-b438-a189ac6b3241\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.752886 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a6275ee-1fe3-407a-b438-a189ac6b3241-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s77bh\" (UID: \"9a6275ee-1fe3-407a-b438-a189ac6b3241\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.768856 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2h44\" (UniqueName: \"kubernetes.io/projected/9a6275ee-1fe3-407a-b438-a189ac6b3241-kube-api-access-t2h44\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-s77bh\" (UID: \"9a6275ee-1fe3-407a-b438-a189ac6b3241\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh" Jan 21 13:35:36 crc kubenswrapper[4765]: I0121 13:35:36.875169 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh" Jan 21 13:35:37 crc kubenswrapper[4765]: I0121 13:35:37.395016 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh"] Jan 21 13:35:37 crc kubenswrapper[4765]: W0121 13:35:37.400400 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a6275ee_1fe3_407a_b438_a189ac6b3241.slice/crio-db28d628f3fa254a407cff23e2dad0fc31016f8eb4e357da6763b10a47450610 WatchSource:0}: Error finding container db28d628f3fa254a407cff23e2dad0fc31016f8eb4e357da6763b10a47450610: Status 404 returned error can't find the container with id db28d628f3fa254a407cff23e2dad0fc31016f8eb4e357da6763b10a47450610 Jan 21 13:35:37 crc kubenswrapper[4765]: I0121 13:35:37.401720 4765 scope.go:117] "RemoveContainer" containerID="69158d1fe50f2ce7919262c6f0f42c64a0f0afa79f20bfc41ba3486cb4fec69c" Jan 21 13:35:37 crc kubenswrapper[4765]: I0121 13:35:37.402539 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:35:37 crc kubenswrapper[4765]: I0121 13:35:37.453871 4765 scope.go:117] "RemoveContainer" containerID="21262357282bbb7af9ffdfa83d9ad2025f6b0adf0775e91fab3bbb72d1a548a0" Jan 21 13:35:37 crc kubenswrapper[4765]: I0121 13:35:37.455817 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh" event={"ID":"9a6275ee-1fe3-407a-b438-a189ac6b3241","Type":"ContainerStarted","Data":"db28d628f3fa254a407cff23e2dad0fc31016f8eb4e357da6763b10a47450610"} Jan 21 13:35:37 crc kubenswrapper[4765]: I0121 13:35:37.477882 4765 scope.go:117] 
"RemoveContainer" containerID="74ca9e66a2fe6aecacb06ccff97df34640d6ec03ef879628b10e0e3937f54f3f" Jan 21 13:35:37 crc kubenswrapper[4765]: I0121 13:35:37.524840 4765 scope.go:117] "RemoveContainer" containerID="75537113ffa7f9b977ff2c7e4e8e71e502ce21e5ed0c5d09a50950eaf45b6d8d" Jan 21 13:35:37 crc kubenswrapper[4765]: I0121 13:35:37.545057 4765 scope.go:117] "RemoveContainer" containerID="87c8805eaf388ddecbd04f74faeb5b5c293e75054835dd01cf0a21f3f6fe2adf" Jan 21 13:35:37 crc kubenswrapper[4765]: I0121 13:35:37.565716 4765 scope.go:117] "RemoveContainer" containerID="bbb972b66db43b6e66d81b1563e861b0dbbe0dfe33feb298cbaf499dc6d2f21f" Jan 21 13:35:37 crc kubenswrapper[4765]: I0121 13:35:37.587175 4765 scope.go:117] "RemoveContainer" containerID="d4afaa9160d8ad23fea28b505c38684dda32046b259c43ce3f46c093ae8fa356" Jan 21 13:35:38 crc kubenswrapper[4765]: I0121 13:35:38.469058 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh" event={"ID":"9a6275ee-1fe3-407a-b438-a189ac6b3241","Type":"ContainerStarted","Data":"fcc18a9bf193e9cd51a05c4a3c01b9e055dd0659493ef9e6de6203136e5ff214"} Jan 21 13:35:38 crc kubenswrapper[4765]: I0121 13:35:38.495889 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh" podStartSLOduration=2.036652073 podStartE2EDuration="2.495867478s" podCreationTimestamp="2026-01-21 13:35:36 +0000 UTC" firstStartedPulling="2026-01-21 13:35:37.402283607 +0000 UTC m=+1998.420009429" lastFinishedPulling="2026-01-21 13:35:37.861499012 +0000 UTC m=+1998.879224834" observedRunningTime="2026-01-21 13:35:38.49132539 +0000 UTC m=+1999.509051222" watchObservedRunningTime="2026-01-21 13:35:38.495867478 +0000 UTC m=+1999.513593310" Jan 21 13:35:50 crc kubenswrapper[4765]: I0121 13:35:50.063578 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-j6x8m"] Jan 21 13:35:50 crc kubenswrapper[4765]: I0121 13:35:50.073716 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-j6x8m"] Jan 21 13:35:51 crc kubenswrapper[4765]: I0121 13:35:51.032329 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fd76t"] Jan 21 13:35:51 crc kubenswrapper[4765]: I0121 13:35:51.042977 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fd76t"] Jan 21 13:35:51 crc kubenswrapper[4765]: I0121 13:35:51.626831 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dc89bc6-a242-4876-bf76-4d93cbc8d55d" path="/var/lib/kubelet/pods/2dc89bc6-a242-4876-bf76-4d93cbc8d55d/volumes" Jan 21 13:35:51 crc kubenswrapper[4765]: I0121 13:35:51.628190 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ded68b6f-882a-4df2-afc6-760c969f9724" path="/var/lib/kubelet/pods/ded68b6f-882a-4df2-afc6-760c969f9724/volumes" Jan 21 13:36:36 crc kubenswrapper[4765]: I0121 13:36:36.064699 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-dkqn8"] Jan 21 13:36:36 crc kubenswrapper[4765]: I0121 13:36:36.082753 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-dkqn8"] Jan 21 13:36:37 crc kubenswrapper[4765]: I0121 13:36:37.629783 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ea33372-7a63-416b-a934-2f938cf0a212" 
path="/var/lib/kubelet/pods/9ea33372-7a63-416b-a934-2f938cf0a212/volumes" Jan 21 13:36:37 crc kubenswrapper[4765]: I0121 13:36:37.734906 4765 scope.go:117] "RemoveContainer" containerID="16f04dfe7f499fa61852a48d4680778d3746576e8f5979308d34b4da1b26aaa8" Jan 21 13:36:37 crc kubenswrapper[4765]: I0121 13:36:37.788146 4765 scope.go:117] "RemoveContainer" containerID="174f571d566dc44d071eb42dcfca229e44df5df09a299b3017dce551d67830a7" Jan 21 13:36:37 crc kubenswrapper[4765]: I0121 13:36:37.836497 4765 scope.go:117] "RemoveContainer" containerID="6f475475f6593d1407914c9b3427b6d9985ed2c31a852cf7f2809406d2ef4fbb" Jan 21 13:37:09 crc kubenswrapper[4765]: I0121 13:37:09.697342 4765 generic.go:334] "Generic (PLEG): container finished" podID="9a6275ee-1fe3-407a-b438-a189ac6b3241" containerID="fcc18a9bf193e9cd51a05c4a3c01b9e055dd0659493ef9e6de6203136e5ff214" exitCode=0 Jan 21 13:37:09 crc kubenswrapper[4765]: I0121 13:37:09.697537 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh" event={"ID":"9a6275ee-1fe3-407a-b438-a189ac6b3241","Type":"ContainerDied","Data":"fcc18a9bf193e9cd51a05c4a3c01b9e055dd0659493ef9e6de6203136e5ff214"} Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.130272 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh" Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.255888 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6275ee-1fe3-407a-b438-a189ac6b3241-inventory\") pod \"9a6275ee-1fe3-407a-b438-a189ac6b3241\" (UID: \"9a6275ee-1fe3-407a-b438-a189ac6b3241\") " Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.256384 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2h44\" (UniqueName: \"kubernetes.io/projected/9a6275ee-1fe3-407a-b438-a189ac6b3241-kube-api-access-t2h44\") pod \"9a6275ee-1fe3-407a-b438-a189ac6b3241\" (UID: \"9a6275ee-1fe3-407a-b438-a189ac6b3241\") " Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.256463 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a6275ee-1fe3-407a-b438-a189ac6b3241-ssh-key-openstack-edpm-ipam\") pod \"9a6275ee-1fe3-407a-b438-a189ac6b3241\" (UID: \"9a6275ee-1fe3-407a-b438-a189ac6b3241\") " Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.261474 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a6275ee-1fe3-407a-b438-a189ac6b3241-kube-api-access-t2h44" (OuterVolumeSpecName: "kube-api-access-t2h44") pod "9a6275ee-1fe3-407a-b438-a189ac6b3241" (UID: "9a6275ee-1fe3-407a-b438-a189ac6b3241"). InnerVolumeSpecName "kube-api-access-t2h44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.283606 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6275ee-1fe3-407a-b438-a189ac6b3241-inventory" (OuterVolumeSpecName: "inventory") pod "9a6275ee-1fe3-407a-b438-a189ac6b3241" (UID: "9a6275ee-1fe3-407a-b438-a189ac6b3241"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.289426 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6275ee-1fe3-407a-b438-a189ac6b3241-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9a6275ee-1fe3-407a-b438-a189ac6b3241" (UID: "9a6275ee-1fe3-407a-b438-a189ac6b3241"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.359422 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a6275ee-1fe3-407a-b438-a189ac6b3241-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.359670 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2h44\" (UniqueName: \"kubernetes.io/projected/9a6275ee-1fe3-407a-b438-a189ac6b3241-kube-api-access-t2h44\") on node \"crc\" DevicePath \"\"" Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.359738 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a6275ee-1fe3-407a-b438-a189ac6b3241-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.717326 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh" event={"ID":"9a6275ee-1fe3-407a-b438-a189ac6b3241","Type":"ContainerDied","Data":"db28d628f3fa254a407cff23e2dad0fc31016f8eb4e357da6763b10a47450610"} Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.717575 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db28d628f3fa254a407cff23e2dad0fc31016f8eb4e357da6763b10a47450610" Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.717393 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-s77bh" Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.828915 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn"] Jan 21 13:37:11 crc kubenswrapper[4765]: E0121 13:37:11.829405 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a6275ee-1fe3-407a-b438-a189ac6b3241" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.829448 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a6275ee-1fe3-407a-b438-a189ac6b3241" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.829684 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a6275ee-1fe3-407a-b438-a189ac6b3241" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.830372 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn" Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.832744 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x88cm" Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.833358 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.833739 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.834120 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.854390 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn"] Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.970280 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f966a827-0001-4f9f-9600-072b24c50c9e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn\" (UID: \"f966a827-0001-4f9f-9600-072b24c50c9e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn" Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.970365 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97xjh\" (UniqueName: \"kubernetes.io/projected/f966a827-0001-4f9f-9600-072b24c50c9e-kube-api-access-97xjh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn\" (UID: \"f966a827-0001-4f9f-9600-072b24c50c9e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn" Jan 21 13:37:11 crc kubenswrapper[4765]: I0121 13:37:11.970510 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f966a827-0001-4f9f-9600-072b24c50c9e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn\" (UID: \"f966a827-0001-4f9f-9600-072b24c50c9e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn" Jan 21 13:37:12 crc kubenswrapper[4765]: I0121 13:37:12.072526 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f966a827-0001-4f9f-9600-072b24c50c9e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn\" (UID: \"f966a827-0001-4f9f-9600-072b24c50c9e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn" Jan 21 13:37:12 crc kubenswrapper[4765]: I0121 13:37:12.072684 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f966a827-0001-4f9f-9600-072b24c50c9e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn\" (UID: \"f966a827-0001-4f9f-9600-072b24c50c9e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn" Jan 21 13:37:12 crc kubenswrapper[4765]: I0121 13:37:12.072745 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97xjh\" (UniqueName: 
\"kubernetes.io/projected/f966a827-0001-4f9f-9600-072b24c50c9e-kube-api-access-97xjh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn\" (UID: \"f966a827-0001-4f9f-9600-072b24c50c9e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn" Jan 21 13:37:12 crc kubenswrapper[4765]: I0121 13:37:12.076932 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f966a827-0001-4f9f-9600-072b24c50c9e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn\" (UID: \"f966a827-0001-4f9f-9600-072b24c50c9e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn" Jan 21 13:37:12 crc kubenswrapper[4765]: I0121 13:37:12.090127 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f966a827-0001-4f9f-9600-072b24c50c9e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn\" (UID: \"f966a827-0001-4f9f-9600-072b24c50c9e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn" Jan 21 13:37:12 crc kubenswrapper[4765]: I0121 13:37:12.095573 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97xjh\" (UniqueName: \"kubernetes.io/projected/f966a827-0001-4f9f-9600-072b24c50c9e-kube-api-access-97xjh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn\" (UID: \"f966a827-0001-4f9f-9600-072b24c50c9e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn" Jan 21 13:37:12 crc kubenswrapper[4765]: I0121 13:37:12.145868 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn" Jan 21 13:37:12 crc kubenswrapper[4765]: I0121 13:37:12.899181 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn"] Jan 21 13:37:13 crc kubenswrapper[4765]: I0121 13:37:13.733108 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn" event={"ID":"f966a827-0001-4f9f-9600-072b24c50c9e","Type":"ContainerStarted","Data":"4bcfa0329887fbebbb2a33dca6ad51603fc2d6ba86fddb1e18d5a538375b1548"} Jan 21 13:37:14 crc kubenswrapper[4765]: I0121 13:37:14.446008 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:37:14 crc kubenswrapper[4765]: I0121 13:37:14.446374 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:37:14 crc kubenswrapper[4765]: I0121 13:37:14.742698 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn" event={"ID":"f966a827-0001-4f9f-9600-072b24c50c9e","Type":"ContainerStarted","Data":"9a905f34a4f0a53dd2cf4473df489ceb84872efe64f808aaa7a9e87e3411523a"} Jan 21 13:37:14 crc kubenswrapper[4765]: I0121 13:37:14.768560 4765 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn" podStartSLOduration=2.507980715 podStartE2EDuration="3.768536896s" podCreationTimestamp="2026-01-21 13:37:11 +0000 UTC" firstStartedPulling="2026-01-21 13:37:12.914072955 +0000 UTC m=+2093.931798777" lastFinishedPulling="2026-01-21 13:37:14.174629136 +0000 UTC m=+2095.192354958" observedRunningTime="2026-01-21 13:37:14.758289136 +0000 UTC m=+2095.776014958" watchObservedRunningTime="2026-01-21 13:37:14.768536896 +0000 UTC m=+2095.786262728"
Jan 21 13:37:19 crc kubenswrapper[4765]: I0121 13:37:19.787340 4765 generic.go:334] "Generic (PLEG): container finished" podID="f966a827-0001-4f9f-9600-072b24c50c9e" containerID="9a905f34a4f0a53dd2cf4473df489ceb84872efe64f808aaa7a9e87e3411523a" exitCode=0
Jan 21 13:37:19 crc kubenswrapper[4765]: I0121 13:37:19.787425 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn" event={"ID":"f966a827-0001-4f9f-9600-072b24c50c9e","Type":"ContainerDied","Data":"9a905f34a4f0a53dd2cf4473df489ceb84872efe64f808aaa7a9e87e3411523a"}
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.237398 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn"
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.408773 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f966a827-0001-4f9f-9600-072b24c50c9e-inventory\") pod \"f966a827-0001-4f9f-9600-072b24c50c9e\" (UID: \"f966a827-0001-4f9f-9600-072b24c50c9e\") "
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.408846 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f966a827-0001-4f9f-9600-072b24c50c9e-ssh-key-openstack-edpm-ipam\") pod \"f966a827-0001-4f9f-9600-072b24c50c9e\" (UID: \"f966a827-0001-4f9f-9600-072b24c50c9e\") "
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.409104 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97xjh\" (UniqueName: \"kubernetes.io/projected/f966a827-0001-4f9f-9600-072b24c50c9e-kube-api-access-97xjh\") pod \"f966a827-0001-4f9f-9600-072b24c50c9e\" (UID: \"f966a827-0001-4f9f-9600-072b24c50c9e\") "
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.428457 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f966a827-0001-4f9f-9600-072b24c50c9e-kube-api-access-97xjh" (OuterVolumeSpecName: "kube-api-access-97xjh") pod "f966a827-0001-4f9f-9600-072b24c50c9e" (UID: "f966a827-0001-4f9f-9600-072b24c50c9e"). InnerVolumeSpecName "kube-api-access-97xjh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.437442 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f966a827-0001-4f9f-9600-072b24c50c9e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f966a827-0001-4f9f-9600-072b24c50c9e" (UID: "f966a827-0001-4f9f-9600-072b24c50c9e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.447451 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f966a827-0001-4f9f-9600-072b24c50c9e-inventory" (OuterVolumeSpecName: "inventory") pod "f966a827-0001-4f9f-9600-072b24c50c9e" (UID: "f966a827-0001-4f9f-9600-072b24c50c9e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.510870 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97xjh\" (UniqueName: \"kubernetes.io/projected/f966a827-0001-4f9f-9600-072b24c50c9e-kube-api-access-97xjh\") on node \"crc\" DevicePath \"\""
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.510916 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f966a827-0001-4f9f-9600-072b24c50c9e-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.510930 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f966a827-0001-4f9f-9600-072b24c50c9e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.803852 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn" event={"ID":"f966a827-0001-4f9f-9600-072b24c50c9e","Type":"ContainerDied","Data":"4bcfa0329887fbebbb2a33dca6ad51603fc2d6ba86fddb1e18d5a538375b1548"}
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.803896 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bcfa0329887fbebbb2a33dca6ad51603fc2d6ba86fddb1e18d5a538375b1548"
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.803946 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn"
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.897234 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk"]
Jan 21 13:37:21 crc kubenswrapper[4765]: E0121 13:37:21.897795 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f966a827-0001-4f9f-9600-072b24c50c9e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.897820 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f966a827-0001-4f9f-9600-072b24c50c9e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.898051 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f966a827-0001-4f9f-9600-072b24c50c9e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.898919 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk"
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.907691 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x88cm"
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.907893 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.908019 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.908175 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.910494 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk"]
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.920898 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d143acd1-ab20-495a-ba80-139132d247e2-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4p9nk\" (UID: \"d143acd1-ab20-495a-ba80-139132d247e2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk"
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.920963 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d143acd1-ab20-495a-ba80-139132d247e2-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4p9nk\" (UID: \"d143acd1-ab20-495a-ba80-139132d247e2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk"
Jan 21 13:37:21 crc kubenswrapper[4765]: I0121 13:37:21.920985 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2kp9\" (UniqueName: \"kubernetes.io/projected/d143acd1-ab20-495a-ba80-139132d247e2-kube-api-access-j2kp9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4p9nk\" (UID: \"d143acd1-ab20-495a-ba80-139132d247e2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk"
Jan 21 13:37:22 crc kubenswrapper[4765]: I0121 13:37:22.022632 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d143acd1-ab20-495a-ba80-139132d247e2-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4p9nk\" (UID: \"d143acd1-ab20-495a-ba80-139132d247e2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk"
Jan 21 13:37:22 crc kubenswrapper[4765]: I0121 13:37:22.022688 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2kp9\" (UniqueName: \"kubernetes.io/projected/d143acd1-ab20-495a-ba80-139132d247e2-kube-api-access-j2kp9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4p9nk\" (UID: \"d143acd1-ab20-495a-ba80-139132d247e2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk"
Jan 21 13:37:22 crc kubenswrapper[4765]: I0121 13:37:22.022848 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d143acd1-ab20-495a-ba80-139132d247e2-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4p9nk\" (UID: \"d143acd1-ab20-495a-ba80-139132d247e2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk"
Jan 21 13:37:22 crc kubenswrapper[4765]: I0121 13:37:22.027966 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d143acd1-ab20-495a-ba80-139132d247e2-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4p9nk\" (UID: \"d143acd1-ab20-495a-ba80-139132d247e2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk"
Jan 21 13:37:22 crc kubenswrapper[4765]: I0121 13:37:22.030775 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d143acd1-ab20-495a-ba80-139132d247e2-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4p9nk\" (UID: \"d143acd1-ab20-495a-ba80-139132d247e2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk"
Jan 21 13:37:22 crc kubenswrapper[4765]: I0121 13:37:22.039000 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2kp9\" (UniqueName: \"kubernetes.io/projected/d143acd1-ab20-495a-ba80-139132d247e2-kube-api-access-j2kp9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4p9nk\" (UID: \"d143acd1-ab20-495a-ba80-139132d247e2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk"
Jan 21 13:37:22 crc kubenswrapper[4765]: I0121 13:37:22.225796 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk"
Jan 21 13:37:23 crc kubenswrapper[4765]: I0121 13:37:23.032982 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk"]
Jan 21 13:37:23 crc kubenswrapper[4765]: I0121 13:37:23.824060 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk" event={"ID":"d143acd1-ab20-495a-ba80-139132d247e2","Type":"ContainerStarted","Data":"c93b37faa32ef883feec59ffde0eb14b007784845deba2ebbf6bb5f762e02e15"}
Jan 21 13:37:23 crc kubenswrapper[4765]: I0121 13:37:23.824628 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk" event={"ID":"d143acd1-ab20-495a-ba80-139132d247e2","Type":"ContainerStarted","Data":"5bbdd0019411c7d9e736c7df6a9ee32bb4c3df89b270300782b1cdbb8799e500"}
Jan 21 13:37:23 crc kubenswrapper[4765]: I0121 13:37:23.855599 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk" podStartSLOduration=2.418253086 podStartE2EDuration="2.8555501s" podCreationTimestamp="2026-01-21 13:37:21 +0000 UTC" firstStartedPulling="2026-01-21 13:37:23.052974081 +0000 UTC m=+2104.070699923" lastFinishedPulling="2026-01-21 13:37:23.490271105 +0000 UTC m=+2104.507996937" observedRunningTime="2026-01-21 13:37:23.848361016 +0000 UTC m=+2104.866086838" watchObservedRunningTime="2026-01-21 13:37:23.8555501 +0000 UTC m=+2104.873275962"
Jan 21 13:37:33 crc kubenswrapper[4765]: I0121 13:37:33.345759 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5nxn9"]
Jan 21 13:37:33 crc kubenswrapper[4765]: I0121 13:37:33.348139 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5nxn9"
Jan 21 13:37:33 crc kubenswrapper[4765]: I0121 13:37:33.417165 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5nxn9"]
Jan 21 13:37:33 crc kubenswrapper[4765]: I0121 13:37:33.535560 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be1b69c-0806-4d8c-afa9-20b68dec5c22-utilities\") pod \"redhat-operators-5nxn9\" (UID: \"9be1b69c-0806-4d8c-afa9-20b68dec5c22\") " pod="openshift-marketplace/redhat-operators-5nxn9"
Jan 21 13:37:33 crc kubenswrapper[4765]: I0121 13:37:33.535673 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4cd6\" (UniqueName: \"kubernetes.io/projected/9be1b69c-0806-4d8c-afa9-20b68dec5c22-kube-api-access-q4cd6\") pod \"redhat-operators-5nxn9\" (UID: \"9be1b69c-0806-4d8c-afa9-20b68dec5c22\") " pod="openshift-marketplace/redhat-operators-5nxn9"
Jan 21 13:37:33 crc kubenswrapper[4765]: I0121 13:37:33.535749 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be1b69c-0806-4d8c-afa9-20b68dec5c22-catalog-content\") pod \"redhat-operators-5nxn9\" (UID: \"9be1b69c-0806-4d8c-afa9-20b68dec5c22\") " pod="openshift-marketplace/redhat-operators-5nxn9"
Jan 21 13:37:33 crc kubenswrapper[4765]: I0121 13:37:33.641745 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4cd6\" (UniqueName: \"kubernetes.io/projected/9be1b69c-0806-4d8c-afa9-20b68dec5c22-kube-api-access-q4cd6\") pod \"redhat-operators-5nxn9\" (UID: \"9be1b69c-0806-4d8c-afa9-20b68dec5c22\") " pod="openshift-marketplace/redhat-operators-5nxn9"
Jan 21 13:37:33 crc kubenswrapper[4765]: I0121 13:37:33.643665 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be1b69c-0806-4d8c-afa9-20b68dec5c22-catalog-content\") pod \"redhat-operators-5nxn9\" (UID: \"9be1b69c-0806-4d8c-afa9-20b68dec5c22\") " pod="openshift-marketplace/redhat-operators-5nxn9"
Jan 21 13:37:33 crc kubenswrapper[4765]: I0121 13:37:33.643854 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be1b69c-0806-4d8c-afa9-20b68dec5c22-utilities\") pod \"redhat-operators-5nxn9\" (UID: \"9be1b69c-0806-4d8c-afa9-20b68dec5c22\") " pod="openshift-marketplace/redhat-operators-5nxn9"
Jan 21 13:37:33 crc kubenswrapper[4765]: I0121 13:37:33.644877 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be1b69c-0806-4d8c-afa9-20b68dec5c22-catalog-content\") pod \"redhat-operators-5nxn9\" (UID: \"9be1b69c-0806-4d8c-afa9-20b68dec5c22\") " pod="openshift-marketplace/redhat-operators-5nxn9"
Jan 21 13:37:33 crc kubenswrapper[4765]: I0121 13:37:33.644931 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be1b69c-0806-4d8c-afa9-20b68dec5c22-utilities\") pod \"redhat-operators-5nxn9\" (UID: \"9be1b69c-0806-4d8c-afa9-20b68dec5c22\") " pod="openshift-marketplace/redhat-operators-5nxn9"
Jan 21 13:37:33 crc kubenswrapper[4765]: I0121 13:37:33.677576 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4cd6\" (UniqueName: \"kubernetes.io/projected/9be1b69c-0806-4d8c-afa9-20b68dec5c22-kube-api-access-q4cd6\") pod \"redhat-operators-5nxn9\" (UID: \"9be1b69c-0806-4d8c-afa9-20b68dec5c22\") " pod="openshift-marketplace/redhat-operators-5nxn9"
Jan 21 13:37:33 crc kubenswrapper[4765]: I0121 13:37:33.966609 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5nxn9"
Jan 21 13:37:34 crc kubenswrapper[4765]: I0121 13:37:34.535422 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5nxn9"]
Jan 21 13:37:34 crc kubenswrapper[4765]: I0121 13:37:34.913968 4765 generic.go:334] "Generic (PLEG): container finished" podID="9be1b69c-0806-4d8c-afa9-20b68dec5c22" containerID="48b361cb11e6d5b87f6c6fd4d01a30de70989cad8e2653fa1c80140df72a8699" exitCode=0
Jan 21 13:37:34 crc kubenswrapper[4765]: I0121 13:37:34.914071 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nxn9" event={"ID":"9be1b69c-0806-4d8c-afa9-20b68dec5c22","Type":"ContainerDied","Data":"48b361cb11e6d5b87f6c6fd4d01a30de70989cad8e2653fa1c80140df72a8699"}
Jan 21 13:37:34 crc kubenswrapper[4765]: I0121 13:37:34.914313 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nxn9" event={"ID":"9be1b69c-0806-4d8c-afa9-20b68dec5c22","Type":"ContainerStarted","Data":"910e945ba506f0b468396f30046ef3d33d544478654d35548b1ad6f97db9c19b"}
Jan 21 13:37:36 crc kubenswrapper[4765]: I0121 13:37:36.997024 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nxn9" event={"ID":"9be1b69c-0806-4d8c-afa9-20b68dec5c22","Type":"ContainerStarted","Data":"4b0601baddba8c7a94835f545e145b561fbe1444c6ec17f074d821917bc9a802"}
Jan 21 13:37:39 crc kubenswrapper[4765]: I0121 13:37:39.030049 4765 generic.go:334] "Generic (PLEG): container finished" podID="9be1b69c-0806-4d8c-afa9-20b68dec5c22" containerID="4b0601baddba8c7a94835f545e145b561fbe1444c6ec17f074d821917bc9a802" exitCode=0
Jan 21 13:37:39 crc kubenswrapper[4765]: I0121 13:37:39.030128 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nxn9" event={"ID":"9be1b69c-0806-4d8c-afa9-20b68dec5c22","Type":"ContainerDied","Data":"4b0601baddba8c7a94835f545e145b561fbe1444c6ec17f074d821917bc9a802"}
Jan 21 13:37:40 crc kubenswrapper[4765]: I0121 13:37:40.045003 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nxn9" event={"ID":"9be1b69c-0806-4d8c-afa9-20b68dec5c22","Type":"ContainerStarted","Data":"dd919abf8279c5d4674b819c1fe1b5b9663ed54fa584b09133a81e0e555aa78d"}
Jan 21 13:37:40 crc kubenswrapper[4765]: I0121 13:37:40.086764 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5nxn9" podStartSLOduration=2.508243457 podStartE2EDuration="7.086745416s" podCreationTimestamp="2026-01-21 13:37:33 +0000 UTC" firstStartedPulling="2026-01-21 13:37:34.91551559 +0000 UTC m=+2115.933241412" lastFinishedPulling="2026-01-21 13:37:39.494017529 +0000 UTC m=+2120.511743371" observedRunningTime="2026-01-21 13:37:40.08089197 +0000 UTC m=+2121.098617802" watchObservedRunningTime="2026-01-21 13:37:40.086745416 +0000 UTC m=+2121.104471238"
Jan 21 13:37:43 crc kubenswrapper[4765]: I0121 13:37:43.966828 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5nxn9"
Jan 21 13:37:43 crc kubenswrapper[4765]: I0121 13:37:43.967223 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5nxn9"
Jan 21 13:37:44 crc kubenswrapper[4765]: I0121 13:37:44.445725 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 13:37:44 crc kubenswrapper[4765]: I0121 13:37:44.445791 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 13:37:45 crc kubenswrapper[4765]: I0121 13:37:45.037093 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5nxn9" podUID="9be1b69c-0806-4d8c-afa9-20b68dec5c22" containerName="registry-server" probeResult="failure" output=<
Jan 21 13:37:45 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s
Jan 21 13:37:45 crc kubenswrapper[4765]: >
Jan 21 13:37:54 crc kubenswrapper[4765]: I0121 13:37:54.017057 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5nxn9"
Jan 21 13:37:54 crc kubenswrapper[4765]: I0121 13:37:54.088751 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5nxn9"
Jan 21 13:37:54 crc kubenswrapper[4765]: I0121 13:37:54.259993 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5nxn9"]
Jan 21 13:37:55 crc kubenswrapper[4765]: I0121 13:37:55.186804 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5nxn9" podUID="9be1b69c-0806-4d8c-afa9-20b68dec5c22" containerName="registry-server" containerID="cri-o://dd919abf8279c5d4674b819c1fe1b5b9663ed54fa584b09133a81e0e555aa78d" gracePeriod=2
Jan 21 13:37:55 crc kubenswrapper[4765]: I0121 13:37:55.629078 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5nxn9"
Jan 21 13:37:55 crc kubenswrapper[4765]: I0121 13:37:55.807293 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be1b69c-0806-4d8c-afa9-20b68dec5c22-catalog-content\") pod \"9be1b69c-0806-4d8c-afa9-20b68dec5c22\" (UID: \"9be1b69c-0806-4d8c-afa9-20b68dec5c22\") "
Jan 21 13:37:55 crc kubenswrapper[4765]: I0121 13:37:55.807357 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4cd6\" (UniqueName: \"kubernetes.io/projected/9be1b69c-0806-4d8c-afa9-20b68dec5c22-kube-api-access-q4cd6\") pod \"9be1b69c-0806-4d8c-afa9-20b68dec5c22\" (UID: \"9be1b69c-0806-4d8c-afa9-20b68dec5c22\") "
Jan 21 13:37:55 crc kubenswrapper[4765]: I0121 13:37:55.807457 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be1b69c-0806-4d8c-afa9-20b68dec5c22-utilities\") pod \"9be1b69c-0806-4d8c-afa9-20b68dec5c22\" (UID: \"9be1b69c-0806-4d8c-afa9-20b68dec5c22\") "
Jan 21 13:37:55 crc kubenswrapper[4765]: I0121 13:37:55.808326 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9be1b69c-0806-4d8c-afa9-20b68dec5c22-utilities" (OuterVolumeSpecName: "utilities") pod "9be1b69c-0806-4d8c-afa9-20b68dec5c22" (UID: "9be1b69c-0806-4d8c-afa9-20b68dec5c22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 13:37:55 crc kubenswrapper[4765]: I0121 13:37:55.817119 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9be1b69c-0806-4d8c-afa9-20b68dec5c22-kube-api-access-q4cd6" (OuterVolumeSpecName: "kube-api-access-q4cd6") pod "9be1b69c-0806-4d8c-afa9-20b68dec5c22" (UID: "9be1b69c-0806-4d8c-afa9-20b68dec5c22"). InnerVolumeSpecName "kube-api-access-q4cd6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:37:55 crc kubenswrapper[4765]: I0121 13:37:55.910305 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4cd6\" (UniqueName: \"kubernetes.io/projected/9be1b69c-0806-4d8c-afa9-20b68dec5c22-kube-api-access-q4cd6\") on node \"crc\" DevicePath \"\""
Jan 21 13:37:55 crc kubenswrapper[4765]: I0121 13:37:55.910332 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be1b69c-0806-4d8c-afa9-20b68dec5c22-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 13:37:55 crc kubenswrapper[4765]: I0121 13:37:55.958550 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9be1b69c-0806-4d8c-afa9-20b68dec5c22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9be1b69c-0806-4d8c-afa9-20b68dec5c22" (UID: "9be1b69c-0806-4d8c-afa9-20b68dec5c22"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 13:37:56 crc kubenswrapper[4765]: I0121 13:37:56.012471 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be1b69c-0806-4d8c-afa9-20b68dec5c22-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 13:37:56 crc kubenswrapper[4765]: I0121 13:37:56.197915 4765 generic.go:334] "Generic (PLEG): container finished" podID="9be1b69c-0806-4d8c-afa9-20b68dec5c22" containerID="dd919abf8279c5d4674b819c1fe1b5b9663ed54fa584b09133a81e0e555aa78d" exitCode=0
Jan 21 13:37:56 crc kubenswrapper[4765]: I0121 13:37:56.197973 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nxn9" event={"ID":"9be1b69c-0806-4d8c-afa9-20b68dec5c22","Type":"ContainerDied","Data":"dd919abf8279c5d4674b819c1fe1b5b9663ed54fa584b09133a81e0e555aa78d"}
Jan 21 13:37:56 crc kubenswrapper[4765]: I0121 13:37:56.197985 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5nxn9"
Jan 21 13:37:56 crc kubenswrapper[4765]: I0121 13:37:56.198006 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nxn9" event={"ID":"9be1b69c-0806-4d8c-afa9-20b68dec5c22","Type":"ContainerDied","Data":"910e945ba506f0b468396f30046ef3d33d544478654d35548b1ad6f97db9c19b"}
Jan 21 13:37:56 crc kubenswrapper[4765]: I0121 13:37:56.198023 4765 scope.go:117] "RemoveContainer" containerID="dd919abf8279c5d4674b819c1fe1b5b9663ed54fa584b09133a81e0e555aa78d"
Jan 21 13:37:56 crc kubenswrapper[4765]: I0121 13:37:56.217419 4765 scope.go:117] "RemoveContainer" containerID="4b0601baddba8c7a94835f545e145b561fbe1444c6ec17f074d821917bc9a802"
Jan 21 13:37:56 crc kubenswrapper[4765]: I0121 13:37:56.250048 4765 scope.go:117] "RemoveContainer" containerID="48b361cb11e6d5b87f6c6fd4d01a30de70989cad8e2653fa1c80140df72a8699"
Jan 21 13:37:56 crc kubenswrapper[4765]: I0121 13:37:56.256324 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5nxn9"]
Jan 21 13:37:56 crc kubenswrapper[4765]: I0121 13:37:56.268250 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5nxn9"]
Jan 21 13:37:56 crc kubenswrapper[4765]: I0121 13:37:56.294402 4765 scope.go:117] "RemoveContainer" containerID="dd919abf8279c5d4674b819c1fe1b5b9663ed54fa584b09133a81e0e555aa78d"
Jan 21 13:37:56 crc kubenswrapper[4765]: E0121 13:37:56.294972 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd919abf8279c5d4674b819c1fe1b5b9663ed54fa584b09133a81e0e555aa78d\": container with ID starting with dd919abf8279c5d4674b819c1fe1b5b9663ed54fa584b09133a81e0e555aa78d not found: ID does not exist" containerID="dd919abf8279c5d4674b819c1fe1b5b9663ed54fa584b09133a81e0e555aa78d"
Jan 21 13:37:56 crc kubenswrapper[4765]: I0121 13:37:56.295013 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd919abf8279c5d4674b819c1fe1b5b9663ed54fa584b09133a81e0e555aa78d"} err="failed to get container status \"dd919abf8279c5d4674b819c1fe1b5b9663ed54fa584b09133a81e0e555aa78d\": rpc error: code = NotFound desc = could not find container \"dd919abf8279c5d4674b819c1fe1b5b9663ed54fa584b09133a81e0e555aa78d\": container with ID starting with dd919abf8279c5d4674b819c1fe1b5b9663ed54fa584b09133a81e0e555aa78d not found: ID does not exist"
Jan 21 13:37:56 crc kubenswrapper[4765]: I0121 13:37:56.295046 4765 scope.go:117] "RemoveContainer" containerID="4b0601baddba8c7a94835f545e145b561fbe1444c6ec17f074d821917bc9a802"
Jan 21 13:37:56 crc kubenswrapper[4765]: E0121 13:37:56.295480 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b0601baddba8c7a94835f545e145b561fbe1444c6ec17f074d821917bc9a802\": container with ID starting with 4b0601baddba8c7a94835f545e145b561fbe1444c6ec17f074d821917bc9a802 not found: ID does not exist" containerID="4b0601baddba8c7a94835f545e145b561fbe1444c6ec17f074d821917bc9a802"
Jan 21 13:37:56 crc kubenswrapper[4765]: I0121 13:37:56.295514 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b0601baddba8c7a94835f545e145b561fbe1444c6ec17f074d821917bc9a802"} err="failed to get container status \"4b0601baddba8c7a94835f545e145b561fbe1444c6ec17f074d821917bc9a802\": rpc error: code = NotFound desc = could not find container \"4b0601baddba8c7a94835f545e145b561fbe1444c6ec17f074d821917bc9a802\": container with ID starting with 4b0601baddba8c7a94835f545e145b561fbe1444c6ec17f074d821917bc9a802 not found: ID does not exist"
Jan 21 13:37:56 crc kubenswrapper[4765]: I0121 13:37:56.295532 4765 scope.go:117] "RemoveContainer" containerID="48b361cb11e6d5b87f6c6fd4d01a30de70989cad8e2653fa1c80140df72a8699"
Jan 21 13:37:56 crc kubenswrapper[4765]: E0121 13:37:56.295795 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48b361cb11e6d5b87f6c6fd4d01a30de70989cad8e2653fa1c80140df72a8699\": container with ID starting with 48b361cb11e6d5b87f6c6fd4d01a30de70989cad8e2653fa1c80140df72a8699 not found: ID does not exist" containerID="48b361cb11e6d5b87f6c6fd4d01a30de70989cad8e2653fa1c80140df72a8699"
Jan 21 13:37:56 crc kubenswrapper[4765]: I0121 13:37:56.295824 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b361cb11e6d5b87f6c6fd4d01a30de70989cad8e2653fa1c80140df72a8699"} err="failed to get container status \"48b361cb11e6d5b87f6c6fd4d01a30de70989cad8e2653fa1c80140df72a8699\": rpc error: code = NotFound desc = could not find container \"48b361cb11e6d5b87f6c6fd4d01a30de70989cad8e2653fa1c80140df72a8699\": container with ID starting with 48b361cb11e6d5b87f6c6fd4d01a30de70989cad8e2653fa1c80140df72a8699 not found: ID does not exist"
Jan 21 13:37:57 crc kubenswrapper[4765]: I0121 13:37:57.630877 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9be1b69c-0806-4d8c-afa9-20b68dec5c22" path="/var/lib/kubelet/pods/9be1b69c-0806-4d8c-afa9-20b68dec5c22/volumes"
Jan 21 13:38:08 crc kubenswrapper[4765]: I0121 13:38:08.324590 4765 generic.go:334] "Generic (PLEG): container finished" podID="d143acd1-ab20-495a-ba80-139132d247e2" containerID="c93b37faa32ef883feec59ffde0eb14b007784845deba2ebbf6bb5f762e02e15" exitCode=0
Jan 21 13:38:08 crc kubenswrapper[4765]: I0121 13:38:08.324718 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk" event={"ID":"d143acd1-ab20-495a-ba80-139132d247e2","Type":"ContainerDied","Data":"c93b37faa32ef883feec59ffde0eb14b007784845deba2ebbf6bb5f762e02e15"}
Jan 21 13:38:09 crc kubenswrapper[4765]: I0121 13:38:09.714945 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk"
Jan 21 13:38:09 crc kubenswrapper[4765]: I0121 13:38:09.894969 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2kp9\" (UniqueName: \"kubernetes.io/projected/d143acd1-ab20-495a-ba80-139132d247e2-kube-api-access-j2kp9\") pod \"d143acd1-ab20-495a-ba80-139132d247e2\" (UID: \"d143acd1-ab20-495a-ba80-139132d247e2\") "
Jan 21 13:38:09 crc kubenswrapper[4765]: I0121 13:38:09.895582 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d143acd1-ab20-495a-ba80-139132d247e2-ssh-key-openstack-edpm-ipam\") pod \"d143acd1-ab20-495a-ba80-139132d247e2\" (UID: \"d143acd1-ab20-495a-ba80-139132d247e2\") "
Jan 21 13:38:09 crc kubenswrapper[4765]: I0121 13:38:09.897032 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d143acd1-ab20-495a-ba80-139132d247e2-inventory\") pod \"d143acd1-ab20-495a-ba80-139132d247e2\" (UID: \"d143acd1-ab20-495a-ba80-139132d247e2\") "
Jan 21 13:38:09 crc kubenswrapper[4765]: I0121 13:38:09.900380 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d143acd1-ab20-495a-ba80-139132d247e2-kube-api-access-j2kp9" (OuterVolumeSpecName: "kube-api-access-j2kp9") pod "d143acd1-ab20-495a-ba80-139132d247e2" (UID: "d143acd1-ab20-495a-ba80-139132d247e2"). InnerVolumeSpecName "kube-api-access-j2kp9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:38:09 crc kubenswrapper[4765]: I0121 13:38:09.922689 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d143acd1-ab20-495a-ba80-139132d247e2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d143acd1-ab20-495a-ba80-139132d247e2" (UID: "d143acd1-ab20-495a-ba80-139132d247e2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:38:09 crc kubenswrapper[4765]: I0121 13:38:09.926819 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d143acd1-ab20-495a-ba80-139132d247e2-inventory" (OuterVolumeSpecName: "inventory") pod "d143acd1-ab20-495a-ba80-139132d247e2" (UID: "d143acd1-ab20-495a-ba80-139132d247e2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.001032 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2kp9\" (UniqueName: \"kubernetes.io/projected/d143acd1-ab20-495a-ba80-139132d247e2-kube-api-access-j2kp9\") on node \"crc\" DevicePath \"\""
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.001069 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d143acd1-ab20-495a-ba80-139132d247e2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.001082 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d143acd1-ab20-495a-ba80-139132d247e2-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.342401 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk" event={"ID":"d143acd1-ab20-495a-ba80-139132d247e2","Type":"ContainerDied","Data":"5bbdd0019411c7d9e736c7df6a9ee32bb4c3df89b270300782b1cdbb8799e500"}
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.342442 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bbdd0019411c7d9e736c7df6a9ee32bb4c3df89b270300782b1cdbb8799e500"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.342468 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4p9nk"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.449347 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72"]
Jan 21 13:38:10 crc kubenswrapper[4765]: E0121 13:38:10.449733 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be1b69c-0806-4d8c-afa9-20b68dec5c22" containerName="extract-utilities"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.449749 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be1b69c-0806-4d8c-afa9-20b68dec5c22" containerName="extract-utilities"
Jan 21 13:38:10 crc kubenswrapper[4765]: E0121 13:38:10.449763 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be1b69c-0806-4d8c-afa9-20b68dec5c22" containerName="extract-content"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.449770 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be1b69c-0806-4d8c-afa9-20b68dec5c22" containerName="extract-content"
Jan 21 13:38:10 crc kubenswrapper[4765]: E0121 13:38:10.449786 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be1b69c-0806-4d8c-afa9-20b68dec5c22" containerName="registry-server"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.449794 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be1b69c-0806-4d8c-afa9-20b68dec5c22" containerName="registry-server"
Jan 21 13:38:10 crc kubenswrapper[4765]: E0121 13:38:10.449809 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d143acd1-ab20-495a-ba80-139132d247e2" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.449816 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="d143acd1-ab20-495a-ba80-139132d247e2" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.449998 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="d143acd1-ab20-495a-ba80-139132d247e2" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.450030 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be1b69c-0806-4d8c-afa9-20b68dec5c22" containerName="registry-server"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.450777 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.452777 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.452998 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.456224 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x88cm"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.459281 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.466046 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72"]
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.612195 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b30a7ddd-acca-4134-8807-675f980b4a4b-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vhz72\" (UID: \"b30a7ddd-acca-4134-8807-675f980b4a4b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.612276 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9qrf\" (UniqueName: \"kubernetes.io/projected/b30a7ddd-acca-4134-8807-675f980b4a4b-kube-api-access-z9qrf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vhz72\" (UID: \"b30a7ddd-acca-4134-8807-675f980b4a4b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.612321 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b30a7ddd-acca-4134-8807-675f980b4a4b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vhz72\" (UID: \"b30a7ddd-acca-4134-8807-675f980b4a4b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.713725 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b30a7ddd-acca-4134-8807-675f980b4a4b-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vhz72\" (UID: \"b30a7ddd-acca-4134-8807-675f980b4a4b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.714623 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9qrf\" (UniqueName: \"kubernetes.io/projected/b30a7ddd-acca-4134-8807-675f980b4a4b-kube-api-access-z9qrf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vhz72\" (UID: \"b30a7ddd-acca-4134-8807-675f980b4a4b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.714675 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b30a7ddd-acca-4134-8807-675f980b4a4b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vhz72\" (UID: \"b30a7ddd-acca-4134-8807-675f980b4a4b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.719477 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b30a7ddd-acca-4134-8807-675f980b4a4b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vhz72\" (UID: \"b30a7ddd-acca-4134-8807-675f980b4a4b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.732201 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b30a7ddd-acca-4134-8807-675f980b4a4b-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vhz72\" (UID: \"b30a7ddd-acca-4134-8807-675f980b4a4b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.736782 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9qrf\" (UniqueName: \"kubernetes.io/projected/b30a7ddd-acca-4134-8807-675f980b4a4b-kube-api-access-z9qrf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vhz72\" (UID: \"b30a7ddd-acca-4134-8807-675f980b4a4b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72"
Jan 21 13:38:10 crc kubenswrapper[4765]: I0121 13:38:10.766838 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72"
Jan 21 13:38:11 crc kubenswrapper[4765]: I0121 13:38:11.358335 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72"]
Jan 21 13:38:12 crc kubenswrapper[4765]: I0121 13:38:12.365303 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72" event={"ID":"b30a7ddd-acca-4134-8807-675f980b4a4b","Type":"ContainerStarted","Data":"8d59164b3b6876421b446bcaa563131c7e1f08d814ee59d6f02a7fd3eeeb161c"}
Jan 21 13:38:12 crc kubenswrapper[4765]: I0121 13:38:12.365619 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72" event={"ID":"b30a7ddd-acca-4134-8807-675f980b4a4b","Type":"ContainerStarted","Data":"4bca30287d323b136a023c49c9c14764d1593081dc9f7e29eb4c9b758acbc996"}
Jan 21 13:38:12 crc kubenswrapper[4765]: I0121 13:38:12.388644 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72" podStartSLOduration=1.970778948 podStartE2EDuration="2.388623074s" podCreationTimestamp="2026-01-21 13:38:10 +0000 UTC" firstStartedPulling="2026-01-21 13:38:11.372786388 +0000 UTC m=+2152.390512220" lastFinishedPulling="2026-01-21 13:38:11.790630514 +0000 UTC m=+2152.808356346" observedRunningTime="2026-01-21 13:38:12.384040081 +0000 UTC m=+2153.401765903" watchObservedRunningTime="2026-01-21 13:38:12.388623074 +0000 UTC m=+2153.406348896"
Jan 21 13:38:14 crc kubenswrapper[4765]: I0121 13:38:14.446699 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 13:38:14 crc kubenswrapper[4765]: I0121 13:38:14.447134 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 13:38:14 crc kubenswrapper[4765]: I0121 13:38:14.447258 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v72nq"
Jan 21 13:38:14 crc kubenswrapper[4765]: I0121 13:38:14.448603 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8be9c6b30eac9194fe69597ddad7819ab0f25189067a0149bf0d2a68338af1f4"} pod="openshift-machine-config-operator/machine-config-daemon-v72nq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 13:38:14 crc kubenswrapper[4765]: I0121 13:38:14.448732 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" containerID="cri-o://8be9c6b30eac9194fe69597ddad7819ab0f25189067a0149bf0d2a68338af1f4" gracePeriod=600
Jan 21 13:38:15 crc kubenswrapper[4765]: I0121 13:38:15.399188 4765 generic.go:334] "Generic (PLEG): container finished" podID="e149390c-e4da-4dfd-bed2-b14de058f921" containerID="8be9c6b30eac9194fe69597ddad7819ab0f25189067a0149bf0d2a68338af1f4" exitCode=0
Jan 21 13:38:15 crc kubenswrapper[4765]: I0121 13:38:15.399266 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerDied","Data":"8be9c6b30eac9194fe69597ddad7819ab0f25189067a0149bf0d2a68338af1f4"}
Jan 21 13:38:15 crc kubenswrapper[4765]: I0121 13:38:15.399871 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"}
Jan 21 13:38:15 crc kubenswrapper[4765]: I0121 13:38:15.399913 4765 scope.go:117] "RemoveContainer" containerID="7c6c34a9f13505155d9f617747ffc547b02681a497bf20bcd0f29e3621366ea0"
Jan 21 13:39:08 crc kubenswrapper[4765]: I0121 13:39:08.273993 4765 generic.go:334] "Generic (PLEG): container finished" podID="b30a7ddd-acca-4134-8807-675f980b4a4b" containerID="8d59164b3b6876421b446bcaa563131c7e1f08d814ee59d6f02a7fd3eeeb161c" exitCode=0
Jan 21 13:39:08 crc kubenswrapper[4765]: I0121 13:39:08.274091 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72" event={"ID":"b30a7ddd-acca-4134-8807-675f980b4a4b","Type":"ContainerDied","Data":"8d59164b3b6876421b446bcaa563131c7e1f08d814ee59d6f02a7fd3eeeb161c"}
Jan 21 13:39:09 crc kubenswrapper[4765]: I0121 13:39:09.704780 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72"
Jan 21 13:39:09 crc kubenswrapper[4765]: I0121 13:39:09.809596 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9qrf\" (UniqueName: \"kubernetes.io/projected/b30a7ddd-acca-4134-8807-675f980b4a4b-kube-api-access-z9qrf\") pod \"b30a7ddd-acca-4134-8807-675f980b4a4b\" (UID: \"b30a7ddd-acca-4134-8807-675f980b4a4b\") "
Jan 21 13:39:09 crc kubenswrapper[4765]: I0121 13:39:09.809645 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b30a7ddd-acca-4134-8807-675f980b4a4b-inventory\") pod \"b30a7ddd-acca-4134-8807-675f980b4a4b\" (UID: \"b30a7ddd-acca-4134-8807-675f980b4a4b\") "
Jan 21 13:39:09 crc kubenswrapper[4765]: I0121 13:39:09.809737 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b30a7ddd-acca-4134-8807-675f980b4a4b-ssh-key-openstack-edpm-ipam\") pod \"b30a7ddd-acca-4134-8807-675f980b4a4b\" (UID: \"b30a7ddd-acca-4134-8807-675f980b4a4b\") "
Jan 21 13:39:09 crc kubenswrapper[4765]: I0121 13:39:09.815342 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b30a7ddd-acca-4134-8807-675f980b4a4b-kube-api-access-z9qrf" (OuterVolumeSpecName: "kube-api-access-z9qrf") pod "b30a7ddd-acca-4134-8807-675f980b4a4b" (UID: "b30a7ddd-acca-4134-8807-675f980b4a4b"). InnerVolumeSpecName "kube-api-access-z9qrf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:39:09 crc kubenswrapper[4765]: I0121 13:39:09.840327 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b30a7ddd-acca-4134-8807-675f980b4a4b-inventory" (OuterVolumeSpecName: "inventory") pod "b30a7ddd-acca-4134-8807-675f980b4a4b" (UID: "b30a7ddd-acca-4134-8807-675f980b4a4b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:39:09 crc kubenswrapper[4765]: I0121 13:39:09.846831 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b30a7ddd-acca-4134-8807-675f980b4a4b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b30a7ddd-acca-4134-8807-675f980b4a4b" (UID: "b30a7ddd-acca-4134-8807-675f980b4a4b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:39:09 crc kubenswrapper[4765]: I0121 13:39:09.912439 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9qrf\" (UniqueName: \"kubernetes.io/projected/b30a7ddd-acca-4134-8807-675f980b4a4b-kube-api-access-z9qrf\") on node \"crc\" DevicePath \"\""
Jan 21 13:39:09 crc kubenswrapper[4765]: I0121 13:39:09.912476 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b30a7ddd-acca-4134-8807-675f980b4a4b-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 13:39:09 crc kubenswrapper[4765]: I0121 13:39:09.912491 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b30a7ddd-acca-4134-8807-675f980b4a4b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.292901 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72" event={"ID":"b30a7ddd-acca-4134-8807-675f980b4a4b","Type":"ContainerDied","Data":"4bca30287d323b136a023c49c9c14764d1593081dc9f7e29eb4c9b758acbc996"}
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.293296 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bca30287d323b136a023c49c9c14764d1593081dc9f7e29eb4c9b758acbc996"
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.292966 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vhz72"
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.465880 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-g6wfz"]
Jan 21 13:39:10 crc kubenswrapper[4765]: E0121 13:39:10.466408 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b30a7ddd-acca-4134-8807-675f980b4a4b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.466427 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b30a7ddd-acca-4134-8807-675f980b4a4b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.466631 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="b30a7ddd-acca-4134-8807-675f980b4a4b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.467229 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-g6wfz"
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.470666 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.470904 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.471059 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x88cm"
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.475340 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.483762 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-g6wfz"]
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.628671 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c992s\" (UniqueName: \"kubernetes.io/projected/8ea0edfd-ace0-474e-b868-7ad5bed77cab-kube-api-access-c992s\") pod \"ssh-known-hosts-edpm-deployment-g6wfz\" (UID: \"8ea0edfd-ace0-474e-b868-7ad5bed77cab\") " pod="openstack/ssh-known-hosts-edpm-deployment-g6wfz"
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.628831 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ea0edfd-ace0-474e-b868-7ad5bed77cab-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-g6wfz\" (UID: \"8ea0edfd-ace0-474e-b868-7ad5bed77cab\") " pod="openstack/ssh-known-hosts-edpm-deployment-g6wfz"
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.628956 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8ea0edfd-ace0-474e-b868-7ad5bed77cab-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-g6wfz\" (UID: \"8ea0edfd-ace0-474e-b868-7ad5bed77cab\") " pod="openstack/ssh-known-hosts-edpm-deployment-g6wfz"
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.730400 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c992s\" (UniqueName: \"kubernetes.io/projected/8ea0edfd-ace0-474e-b868-7ad5bed77cab-kube-api-access-c992s\") pod \"ssh-known-hosts-edpm-deployment-g6wfz\" (UID: \"8ea0edfd-ace0-474e-b868-7ad5bed77cab\") " pod="openstack/ssh-known-hosts-edpm-deployment-g6wfz"
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.730516 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ea0edfd-ace0-474e-b868-7ad5bed77cab-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-g6wfz\" (UID: \"8ea0edfd-ace0-474e-b868-7ad5bed77cab\") " pod="openstack/ssh-known-hosts-edpm-deployment-g6wfz"
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.730579 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8ea0edfd-ace0-474e-b868-7ad5bed77cab-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-g6wfz\" (UID: \"8ea0edfd-ace0-474e-b868-7ad5bed77cab\") " pod="openstack/ssh-known-hosts-edpm-deployment-g6wfz"
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.735113 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ea0edfd-ace0-474e-b868-7ad5bed77cab-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-g6wfz\" (UID: \"8ea0edfd-ace0-474e-b868-7ad5bed77cab\") " pod="openstack/ssh-known-hosts-edpm-deployment-g6wfz"
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.735376 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8ea0edfd-ace0-474e-b868-7ad5bed77cab-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-g6wfz\" (UID: \"8ea0edfd-ace0-474e-b868-7ad5bed77cab\") " pod="openstack/ssh-known-hosts-edpm-deployment-g6wfz"
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.756360 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c992s\" (UniqueName: \"kubernetes.io/projected/8ea0edfd-ace0-474e-b868-7ad5bed77cab-kube-api-access-c992s\") pod \"ssh-known-hosts-edpm-deployment-g6wfz\" (UID: \"8ea0edfd-ace0-474e-b868-7ad5bed77cab\") " pod="openstack/ssh-known-hosts-edpm-deployment-g6wfz"
Jan 21 13:39:10 crc kubenswrapper[4765]: I0121 13:39:10.794676 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-g6wfz"
Jan 21 13:39:11 crc kubenswrapper[4765]: I0121 13:39:11.360415 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-g6wfz"]
Jan 21 13:39:12 crc kubenswrapper[4765]: I0121 13:39:12.310614 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-g6wfz" event={"ID":"8ea0edfd-ace0-474e-b868-7ad5bed77cab","Type":"ContainerStarted","Data":"4097305263e14c37f58610a5fe9d3131b87d6686a7a42702025775bb91d99181"}
Jan 21 13:39:12 crc kubenswrapper[4765]: I0121 13:39:12.311241 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-g6wfz" event={"ID":"8ea0edfd-ace0-474e-b868-7ad5bed77cab","Type":"ContainerStarted","Data":"301291aff50e95b65d237eb1b728f90b2dd037ae0d46e4f22733bcae90707f7a"}
Jan 21 13:39:12 crc kubenswrapper[4765]: I0121 13:39:12.335685 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-g6wfz" podStartSLOduration=1.894438377 podStartE2EDuration="2.335668111s" podCreationTimestamp="2026-01-21 13:39:10 +0000 UTC" firstStartedPulling="2026-01-21 13:39:11.381875304 +0000 UTC m=+2212.399601127" lastFinishedPulling="2026-01-21 13:39:11.823105039 +0000 UTC m=+2212.840830861" observedRunningTime="2026-01-21 13:39:12.333283712 +0000 UTC m=+2213.351009534" watchObservedRunningTime="2026-01-21 13:39:12.335668111 +0000 UTC m=+2213.353393923"
Jan 21 13:39:19 crc kubenswrapper[4765]: I0121 13:39:19.366990 4765 generic.go:334] "Generic (PLEG): container finished" podID="8ea0edfd-ace0-474e-b868-7ad5bed77cab" containerID="4097305263e14c37f58610a5fe9d3131b87d6686a7a42702025775bb91d99181" exitCode=0
Jan 21 13:39:19 crc kubenswrapper[4765]: I0121 13:39:19.367120 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-g6wfz" event={"ID":"8ea0edfd-ace0-474e-b868-7ad5bed77cab","Type":"ContainerDied","Data":"4097305263e14c37f58610a5fe9d3131b87d6686a7a42702025775bb91d99181"}
Jan 21 13:39:20 crc kubenswrapper[4765]: I0121 13:39:20.856903 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-g6wfz"
Jan 21 13:39:20 crc kubenswrapper[4765]: I0121 13:39:20.951108 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ea0edfd-ace0-474e-b868-7ad5bed77cab-ssh-key-openstack-edpm-ipam\") pod \"8ea0edfd-ace0-474e-b868-7ad5bed77cab\" (UID: \"8ea0edfd-ace0-474e-b868-7ad5bed77cab\") "
Jan 21 13:39:20 crc kubenswrapper[4765]: I0121 13:39:20.951684 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c992s\" (UniqueName: \"kubernetes.io/projected/8ea0edfd-ace0-474e-b868-7ad5bed77cab-kube-api-access-c992s\") pod \"8ea0edfd-ace0-474e-b868-7ad5bed77cab\" (UID: \"8ea0edfd-ace0-474e-b868-7ad5bed77cab\") "
Jan 21 13:39:20 crc kubenswrapper[4765]: I0121 13:39:20.951754 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8ea0edfd-ace0-474e-b868-7ad5bed77cab-inventory-0\") pod \"8ea0edfd-ace0-474e-b868-7ad5bed77cab\" (UID: \"8ea0edfd-ace0-474e-b868-7ad5bed77cab\") "
Jan 21 13:39:20 crc kubenswrapper[4765]: I0121 13:39:20.973466 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea0edfd-ace0-474e-b868-7ad5bed77cab-kube-api-access-c992s" (OuterVolumeSpecName: "kube-api-access-c992s") pod "8ea0edfd-ace0-474e-b868-7ad5bed77cab" (UID: "8ea0edfd-ace0-474e-b868-7ad5bed77cab"). InnerVolumeSpecName "kube-api-access-c992s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:39:20 crc kubenswrapper[4765]: I0121 13:39:20.980506 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea0edfd-ace0-474e-b868-7ad5bed77cab-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "8ea0edfd-ace0-474e-b868-7ad5bed77cab" (UID: "8ea0edfd-ace0-474e-b868-7ad5bed77cab"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.001195 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea0edfd-ace0-474e-b868-7ad5bed77cab-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8ea0edfd-ace0-474e-b868-7ad5bed77cab" (UID: "8ea0edfd-ace0-474e-b868-7ad5bed77cab"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.054711 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ea0edfd-ace0-474e-b868-7ad5bed77cab-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.054749 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c992s\" (UniqueName: \"kubernetes.io/projected/8ea0edfd-ace0-474e-b868-7ad5bed77cab-kube-api-access-c992s\") on node \"crc\" DevicePath \"\""
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.054762 4765 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8ea0edfd-ace0-474e-b868-7ad5bed77cab-inventory-0\") on node \"crc\" DevicePath \"\""
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.391122 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-g6wfz" event={"ID":"8ea0edfd-ace0-474e-b868-7ad5bed77cab","Type":"ContainerDied","Data":"301291aff50e95b65d237eb1b728f90b2dd037ae0d46e4f22733bcae90707f7a"}
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.391187 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="301291aff50e95b65d237eb1b728f90b2dd037ae0d46e4f22733bcae90707f7a"
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.391328 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-g6wfz"
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.489671 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl"]
Jan 21 13:39:21 crc kubenswrapper[4765]: E0121 13:39:21.490140 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea0edfd-ace0-474e-b868-7ad5bed77cab" containerName="ssh-known-hosts-edpm-deployment"
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.490160 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea0edfd-ace0-474e-b868-7ad5bed77cab" containerName="ssh-known-hosts-edpm-deployment"
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.490450 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea0edfd-ace0-474e-b868-7ad5bed77cab" containerName="ssh-known-hosts-edpm-deployment"
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.491247 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl"
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.493250 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.493842 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.494096 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x88cm"
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.495000 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.502896 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl"]
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.667316 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4de5f530-bcea-4203-8a79-9e9aebf97e0f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2kwpl\" (UID: \"4de5f530-bcea-4203-8a79-9e9aebf97e0f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl"
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.667390 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4de5f530-bcea-4203-8a79-9e9aebf97e0f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2kwpl\" (UID: \"4de5f530-bcea-4203-8a79-9e9aebf97e0f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl"
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.667431 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dsjz\" (UniqueName: \"kubernetes.io/projected/4de5f530-bcea-4203-8a79-9e9aebf97e0f-kube-api-access-4dsjz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2kwpl\" (UID: \"4de5f530-bcea-4203-8a79-9e9aebf97e0f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl"
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.769847 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4de5f530-bcea-4203-8a79-9e9aebf97e0f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2kwpl\" (UID: \"4de5f530-bcea-4203-8a79-9e9aebf97e0f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl"
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.770993 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4de5f530-bcea-4203-8a79-9e9aebf97e0f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2kwpl\" (UID: \"4de5f530-bcea-4203-8a79-9e9aebf97e0f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl"
Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.771064 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dsjz\" (UniqueName: \"kubernetes.io/projected/4de5f530-bcea-4203-8a79-9e9aebf97e0f-kube-api-access-4dsjz\") pod
\"run-os-edpm-deployment-openstack-edpm-ipam-2kwpl\" (UID: \"4de5f530-bcea-4203-8a79-9e9aebf97e0f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl" Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.776978 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4de5f530-bcea-4203-8a79-9e9aebf97e0f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2kwpl\" (UID: \"4de5f530-bcea-4203-8a79-9e9aebf97e0f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl" Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.781878 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4de5f530-bcea-4203-8a79-9e9aebf97e0f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2kwpl\" (UID: \"4de5f530-bcea-4203-8a79-9e9aebf97e0f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl" Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.790643 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dsjz\" (UniqueName: \"kubernetes.io/projected/4de5f530-bcea-4203-8a79-9e9aebf97e0f-kube-api-access-4dsjz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2kwpl\" (UID: \"4de5f530-bcea-4203-8a79-9e9aebf97e0f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl" Jan 21 13:39:21 crc kubenswrapper[4765]: I0121 13:39:21.809392 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl" Jan 21 13:39:22 crc kubenswrapper[4765]: I0121 13:39:22.386569 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl"] Jan 21 13:39:22 crc kubenswrapper[4765]: I0121 13:39:22.402111 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl" event={"ID":"4de5f530-bcea-4203-8a79-9e9aebf97e0f","Type":"ContainerStarted","Data":"7e27fd6a3a515b2c2420916e63b59160fb2e0518cb5d695495aabfc5b018a062"} Jan 21 13:39:23 crc kubenswrapper[4765]: I0121 13:39:23.410386 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl" event={"ID":"4de5f530-bcea-4203-8a79-9e9aebf97e0f","Type":"ContainerStarted","Data":"8cdfca74c8694f739dd2f4eb36c0b035bf7d3dc69276b7ac20e1ae22cd1a9e0a"} Jan 21 13:39:23 crc kubenswrapper[4765]: I0121 13:39:23.433170 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl" podStartSLOduration=1.971992614 podStartE2EDuration="2.433147566s" podCreationTimestamp="2026-01-21 13:39:21 +0000 UTC" firstStartedPulling="2026-01-21 13:39:22.382882852 +0000 UTC m=+2223.400608674" lastFinishedPulling="2026-01-21 13:39:22.844037804 +0000 UTC m=+2223.861763626" observedRunningTime="2026-01-21 13:39:23.425230697 +0000 UTC m=+2224.442956529" watchObservedRunningTime="2026-01-21 13:39:23.433147566 +0000 UTC m=+2224.450873388" Jan 21 13:39:32 crc kubenswrapper[4765]: I0121 13:39:32.495436 4765 generic.go:334] "Generic (PLEG): container finished" podID="4de5f530-bcea-4203-8a79-9e9aebf97e0f" containerID="8cdfca74c8694f739dd2f4eb36c0b035bf7d3dc69276b7ac20e1ae22cd1a9e0a" exitCode=0 Jan 21 13:39:32 crc kubenswrapper[4765]: I0121 13:39:32.495687 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl" event={"ID":"4de5f530-bcea-4203-8a79-9e9aebf97e0f","Type":"ContainerDied","Data":"8cdfca74c8694f739dd2f4eb36c0b035bf7d3dc69276b7ac20e1ae22cd1a9e0a"} Jan 21 13:39:33 crc kubenswrapper[4765]: I0121 13:39:33.902947 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.045125 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4de5f530-bcea-4203-8a79-9e9aebf97e0f-inventory\") pod \"4de5f530-bcea-4203-8a79-9e9aebf97e0f\" (UID: \"4de5f530-bcea-4203-8a79-9e9aebf97e0f\") " Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.045279 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dsjz\" (UniqueName: \"kubernetes.io/projected/4de5f530-bcea-4203-8a79-9e9aebf97e0f-kube-api-access-4dsjz\") pod \"4de5f530-bcea-4203-8a79-9e9aebf97e0f\" (UID: \"4de5f530-bcea-4203-8a79-9e9aebf97e0f\") " Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.045370 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4de5f530-bcea-4203-8a79-9e9aebf97e0f-ssh-key-openstack-edpm-ipam\") pod \"4de5f530-bcea-4203-8a79-9e9aebf97e0f\" (UID: \"4de5f530-bcea-4203-8a79-9e9aebf97e0f\") " Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.051096 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de5f530-bcea-4203-8a79-9e9aebf97e0f-kube-api-access-4dsjz" (OuterVolumeSpecName: "kube-api-access-4dsjz") pod "4de5f530-bcea-4203-8a79-9e9aebf97e0f" (UID: "4de5f530-bcea-4203-8a79-9e9aebf97e0f"). InnerVolumeSpecName "kube-api-access-4dsjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.085626 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de5f530-bcea-4203-8a79-9e9aebf97e0f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4de5f530-bcea-4203-8a79-9e9aebf97e0f" (UID: "4de5f530-bcea-4203-8a79-9e9aebf97e0f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.085882 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de5f530-bcea-4203-8a79-9e9aebf97e0f-inventory" (OuterVolumeSpecName: "inventory") pod "4de5f530-bcea-4203-8a79-9e9aebf97e0f" (UID: "4de5f530-bcea-4203-8a79-9e9aebf97e0f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.149092 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dsjz\" (UniqueName: \"kubernetes.io/projected/4de5f530-bcea-4203-8a79-9e9aebf97e0f-kube-api-access-4dsjz\") on node \"crc\" DevicePath \"\"" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.149159 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4de5f530-bcea-4203-8a79-9e9aebf97e0f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.149183 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4de5f530-bcea-4203-8a79-9e9aebf97e0f-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.516677 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl" event={"ID":"4de5f530-bcea-4203-8a79-9e9aebf97e0f","Type":"ContainerDied","Data":"7e27fd6a3a515b2c2420916e63b59160fb2e0518cb5d695495aabfc5b018a062"} Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.516732 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e27fd6a3a515b2c2420916e63b59160fb2e0518cb5d695495aabfc5b018a062" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.516807 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2kwpl" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.601863 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb"] Jan 21 13:39:34 crc kubenswrapper[4765]: E0121 13:39:34.602282 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de5f530-bcea-4203-8a79-9e9aebf97e0f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.602295 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de5f530-bcea-4203-8a79-9e9aebf97e0f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.602472 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="4de5f530-bcea-4203-8a79-9e9aebf97e0f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.603074 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.608622 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.610758 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.615572 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.616842 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb"] Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.617462 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x88cm" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.759196 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmxp6\" (UniqueName: \"kubernetes.io/projected/b4fe3c7f-5af2-4efc-bd46-40f31624c194-kube-api-access-qmxp6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb\" (UID: \"b4fe3c7f-5af2-4efc-bd46-40f31624c194\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.759522 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4fe3c7f-5af2-4efc-bd46-40f31624c194-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb\" (UID: \"b4fe3c7f-5af2-4efc-bd46-40f31624c194\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.760403 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4fe3c7f-5af2-4efc-bd46-40f31624c194-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb\" (UID: \"b4fe3c7f-5af2-4efc-bd46-40f31624c194\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.861889 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4fe3c7f-5af2-4efc-bd46-40f31624c194-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb\" (UID: \"b4fe3c7f-5af2-4efc-bd46-40f31624c194\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.862289 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4fe3c7f-5af2-4efc-bd46-40f31624c194-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb\" (UID: \"b4fe3c7f-5af2-4efc-bd46-40f31624c194\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.862506 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmxp6\" (UniqueName: \"kubernetes.io/projected/b4fe3c7f-5af2-4efc-bd46-40f31624c194-kube-api-access-qmxp6\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb\" (UID: \"b4fe3c7f-5af2-4efc-bd46-40f31624c194\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.866903 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4fe3c7f-5af2-4efc-bd46-40f31624c194-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb\" (UID: \"b4fe3c7f-5af2-4efc-bd46-40f31624c194\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.880056 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4fe3c7f-5af2-4efc-bd46-40f31624c194-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb\" (UID: \"b4fe3c7f-5af2-4efc-bd46-40f31624c194\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.882579 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmxp6\" (UniqueName: \"kubernetes.io/projected/b4fe3c7f-5af2-4efc-bd46-40f31624c194-kube-api-access-qmxp6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb\" (UID: \"b4fe3c7f-5af2-4efc-bd46-40f31624c194\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb" Jan 21 13:39:34 crc kubenswrapper[4765]: I0121 13:39:34.921967 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb" Jan 21 13:39:35 crc kubenswrapper[4765]: I0121 13:39:35.475164 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb"] Jan 21 13:39:35 crc kubenswrapper[4765]: I0121 13:39:35.529905 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb" event={"ID":"b4fe3c7f-5af2-4efc-bd46-40f31624c194","Type":"ContainerStarted","Data":"b0e11b0dde30481ff3e1535155516e0008c6b4c23586def3c4fb06449ed34928"} Jan 21 13:39:36 crc kubenswrapper[4765]: I0121 13:39:36.539835 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb" event={"ID":"b4fe3c7f-5af2-4efc-bd46-40f31624c194","Type":"ContainerStarted","Data":"c1087cbdae0bcb7eea533f5d8808f861065ab19825bed250fc28cd1b6aee1fc2"} Jan 21 13:39:36 crc kubenswrapper[4765]: I0121 13:39:36.558612 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb" podStartSLOduration=1.95144559 podStartE2EDuration="2.558592264s" podCreationTimestamp="2026-01-21 13:39:34 +0000 UTC" firstStartedPulling="2026-01-21 13:39:35.483311056 +0000 UTC m=+2236.501036868" lastFinishedPulling="2026-01-21 13:39:36.09045771 +0000 UTC m=+2237.108183542" observedRunningTime="2026-01-21 13:39:36.554190136 +0000 UTC m=+2237.571915948" watchObservedRunningTime="2026-01-21 13:39:36.558592264 +0000 UTC m=+2237.576318086" Jan 21 13:39:47 crc kubenswrapper[4765]: E0121 13:39:47.066538 4765 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4fe3c7f_5af2_4efc_bd46_40f31624c194.slice/crio-conmon-c1087cbdae0bcb7eea533f5d8808f861065ab19825bed250fc28cd1b6aee1fc2.scope\": RecentStats: unable to find data in memory cache]" Jan 21 13:39:47 crc kubenswrapper[4765]: I0121 13:39:47.631601 4765 generic.go:334] "Generic (PLEG): container finished" podID="b4fe3c7f-5af2-4efc-bd46-40f31624c194" containerID="c1087cbdae0bcb7eea533f5d8808f861065ab19825bed250fc28cd1b6aee1fc2" exitCode=0 Jan 21 13:39:47 crc kubenswrapper[4765]: I0121 13:39:47.631683 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb" event={"ID":"b4fe3c7f-5af2-4efc-bd46-40f31624c194","Type":"ContainerDied","Data":"c1087cbdae0bcb7eea533f5d8808f861065ab19825bed250fc28cd1b6aee1fc2"} Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.076884 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.157279 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmxp6\" (UniqueName: \"kubernetes.io/projected/b4fe3c7f-5af2-4efc-bd46-40f31624c194-kube-api-access-qmxp6\") pod \"b4fe3c7f-5af2-4efc-bd46-40f31624c194\" (UID: \"b4fe3c7f-5af2-4efc-bd46-40f31624c194\") " Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.157410 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4fe3c7f-5af2-4efc-bd46-40f31624c194-ssh-key-openstack-edpm-ipam\") pod \"b4fe3c7f-5af2-4efc-bd46-40f31624c194\" (UID: \"b4fe3c7f-5af2-4efc-bd46-40f31624c194\") " Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.157450 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4fe3c7f-5af2-4efc-bd46-40f31624c194-inventory\") pod \"b4fe3c7f-5af2-4efc-bd46-40f31624c194\" (UID: \"b4fe3c7f-5af2-4efc-bd46-40f31624c194\") " Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.166787 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4fe3c7f-5af2-4efc-bd46-40f31624c194-kube-api-access-qmxp6" (OuterVolumeSpecName: "kube-api-access-qmxp6") pod "b4fe3c7f-5af2-4efc-bd46-40f31624c194" (UID: "b4fe3c7f-5af2-4efc-bd46-40f31624c194"). InnerVolumeSpecName "kube-api-access-qmxp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.186467 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4fe3c7f-5af2-4efc-bd46-40f31624c194-inventory" (OuterVolumeSpecName: "inventory") pod "b4fe3c7f-5af2-4efc-bd46-40f31624c194" (UID: "b4fe3c7f-5af2-4efc-bd46-40f31624c194"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.189942 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4fe3c7f-5af2-4efc-bd46-40f31624c194-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b4fe3c7f-5af2-4efc-bd46-40f31624c194" (UID: "b4fe3c7f-5af2-4efc-bd46-40f31624c194"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.260387 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmxp6\" (UniqueName: \"kubernetes.io/projected/b4fe3c7f-5af2-4efc-bd46-40f31624c194-kube-api-access-qmxp6\") on node \"crc\" DevicePath \"\"" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.260423 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4fe3c7f-5af2-4efc-bd46-40f31624c194-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.260439 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4fe3c7f-5af2-4efc-bd46-40f31624c194-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.646998 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb" event={"ID":"b4fe3c7f-5af2-4efc-bd46-40f31624c194","Type":"ContainerDied","Data":"b0e11b0dde30481ff3e1535155516e0008c6b4c23586def3c4fb06449ed34928"} Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.647456 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0e11b0dde30481ff3e1535155516e0008c6b4c23586def3c4fb06449ed34928" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.647050 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.758672 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2"] Jan 21 13:39:49 crc kubenswrapper[4765]: E0121 13:39:49.759026 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fe3c7f-5af2-4efc-bd46-40f31624c194" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.759038 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fe3c7f-5af2-4efc-bd46-40f31624c194" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.759200 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4fe3c7f-5af2-4efc-bd46-40f31624c194" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.759792 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.764716 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.764932 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.765202 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.765304 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.775185 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.775388 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x88cm" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.775615 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.775620 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.785582 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2"] Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.874753 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.874841 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.874871 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.874899 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.874971 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.875001 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.875044 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.875120 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.875189 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.875369 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.875421 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv4fq\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-kube-api-access-hv4fq\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" 
(UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.875453 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.875473 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.875507 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.977745 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.977870 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.977912 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv4fq\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-kube-api-access-hv4fq\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.977945 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 
13:39:49.977973 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.978004 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.978047 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.978109 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.978143 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.978177 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.978908 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.978981 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.979088 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.979148 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.984150 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.984930 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.988494 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.988937 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.989093 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 
21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.990029 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.993080 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.992403 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.994681 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:49 crc kubenswrapper[4765]: I0121 13:39:49.999786 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:50 crc kubenswrapper[4765]: I0121 13:39:50.001562 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:50 crc kubenswrapper[4765]: I0121 13:39:50.002105 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:50 crc kubenswrapper[4765]: I0121 13:39:50.010787 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:50 crc kubenswrapper[4765]: I0121 13:39:50.011937 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv4fq\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-kube-api-access-hv4fq\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:50 crc kubenswrapper[4765]: I0121 13:39:50.085397 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:39:50 crc kubenswrapper[4765]: I0121 13:39:50.628302 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2"] Jan 21 13:39:50 crc kubenswrapper[4765]: I0121 13:39:50.659172 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" event={"ID":"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf","Type":"ContainerStarted","Data":"60a408143421b2a228cfaaf70b1e4792879cfe5e913b8c6b998ef9ff3be854f5"} Jan 21 13:39:52 crc kubenswrapper[4765]: I0121 13:39:52.679781 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" event={"ID":"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf","Type":"ContainerStarted","Data":"801eb5906746540e5e46d2129474292d8289d063c858ad7a8c3215a0b6a8f053"} Jan 21 13:39:52 crc kubenswrapper[4765]: I0121 13:39:52.723262 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" podStartSLOduration=2.557868377 podStartE2EDuration="3.723207998s" podCreationTimestamp="2026-01-21 13:39:49 +0000 UTC" firstStartedPulling="2026-01-21 13:39:50.635564734 +0000 UTC m=+2251.653290556" lastFinishedPulling="2026-01-21 13:39:51.800904355 +0000 UTC m=+2252.818630177" observedRunningTime="2026-01-21 13:39:52.717326098 +0000 UTC m=+2253.735051920" watchObservedRunningTime="2026-01-21 13:39:52.723207998 +0000 UTC m=+2253.740933820" Jan 21 13:40:05 crc kubenswrapper[4765]: I0121 13:40:05.886134 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vljht"] Jan 21 13:40:05 crc kubenswrapper[4765]: I0121 13:40:05.891469 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vljht" Jan 21 13:40:05 crc kubenswrapper[4765]: I0121 13:40:05.927304 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vljht"] Jan 21 13:40:06 crc kubenswrapper[4765]: I0121 13:40:06.047962 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ae7c754-c5ef-4e08-9836-e9a8103a9b5a-catalog-content\") pod \"redhat-marketplace-vljht\" (UID: \"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a\") " pod="openshift-marketplace/redhat-marketplace-vljht" Jan 21 13:40:06 crc kubenswrapper[4765]: I0121 13:40:06.049706 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ae7c754-c5ef-4e08-9836-e9a8103a9b5a-utilities\") pod \"redhat-marketplace-vljht\" (UID: \"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a\") " pod="openshift-marketplace/redhat-marketplace-vljht" Jan 21 13:40:06 crc kubenswrapper[4765]: I0121 13:40:06.050112 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8kj8\" (UniqueName: \"kubernetes.io/projected/7ae7c754-c5ef-4e08-9836-e9a8103a9b5a-kube-api-access-v8kj8\") pod \"redhat-marketplace-vljht\" (UID: \"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a\") " pod="openshift-marketplace/redhat-marketplace-vljht" Jan 21 13:40:06 crc kubenswrapper[4765]: I0121 13:40:06.151978 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ae7c754-c5ef-4e08-9836-e9a8103a9b5a-catalog-content\") pod \"redhat-marketplace-vljht\" (UID: \"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a\") " pod="openshift-marketplace/redhat-marketplace-vljht" Jan 21 13:40:06 crc kubenswrapper[4765]: I0121 13:40:06.152052 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ae7c754-c5ef-4e08-9836-e9a8103a9b5a-utilities\") pod \"redhat-marketplace-vljht\" (UID: \"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a\") " pod="openshift-marketplace/redhat-marketplace-vljht" Jan 21 13:40:06 crc kubenswrapper[4765]: I0121 13:40:06.152107 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8kj8\" (UniqueName: \"kubernetes.io/projected/7ae7c754-c5ef-4e08-9836-e9a8103a9b5a-kube-api-access-v8kj8\") pod \"redhat-marketplace-vljht\" (UID: \"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a\") " pod="openshift-marketplace/redhat-marketplace-vljht" Jan 21 13:40:06 crc kubenswrapper[4765]: I0121 13:40:06.152710 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ae7c754-c5ef-4e08-9836-e9a8103a9b5a-utilities\") pod \"redhat-marketplace-vljht\" (UID: \"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a\") " pod="openshift-marketplace/redhat-marketplace-vljht" Jan 21 13:40:06 crc kubenswrapper[4765]: I0121 13:40:06.152814 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ae7c754-c5ef-4e08-9836-e9a8103a9b5a-catalog-content\") pod \"redhat-marketplace-vljht\" (UID: \"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a\") " pod="openshift-marketplace/redhat-marketplace-vljht" Jan 21 13:40:06 crc kubenswrapper[4765]: I0121 13:40:06.191056 4765 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-v8kj8\" (UniqueName: \"kubernetes.io/projected/7ae7c754-c5ef-4e08-9836-e9a8103a9b5a-kube-api-access-v8kj8\") pod \"redhat-marketplace-vljht\" (UID: \"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a\") " pod="openshift-marketplace/redhat-marketplace-vljht" Jan 21 13:40:06 crc kubenswrapper[4765]: I0121 13:40:06.221925 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vljht" Jan 21 13:40:06 crc kubenswrapper[4765]: I0121 13:40:06.740648 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vljht"] Jan 21 13:40:06 crc kubenswrapper[4765]: I0121 13:40:06.809732 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljht" event={"ID":"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a","Type":"ContainerStarted","Data":"77915d061fdbd6bb20a05b861ab50b5d751b9a06826af922a1bce12b1d757240"} Jan 21 13:40:07 crc kubenswrapper[4765]: I0121 13:40:07.821929 4765 generic.go:334] "Generic (PLEG): container finished" podID="7ae7c754-c5ef-4e08-9836-e9a8103a9b5a" containerID="cfc1575e40843ef010fcf9ba57c07374c2ec6ee59f760ebc7e9a868334e357b3" exitCode=0 Jan 21 13:40:07 crc kubenswrapper[4765]: I0121 13:40:07.822037 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljht" event={"ID":"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a","Type":"ContainerDied","Data":"cfc1575e40843ef010fcf9ba57c07374c2ec6ee59f760ebc7e9a868334e357b3"} Jan 21 13:40:08 crc kubenswrapper[4765]: I0121 13:40:08.835235 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljht" event={"ID":"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a","Type":"ContainerStarted","Data":"a62163c2cfb97cca7c4e3712fbf0fd39e53d14ba3acba0f10ac3988701771838"} Jan 21 13:40:09 crc kubenswrapper[4765]: I0121 13:40:09.845712 4765 generic.go:334] "Generic (PLEG): container finished" podID="7ae7c754-c5ef-4e08-9836-e9a8103a9b5a" containerID="a62163c2cfb97cca7c4e3712fbf0fd39e53d14ba3acba0f10ac3988701771838" exitCode=0 Jan 21 13:40:09 crc kubenswrapper[4765]: I0121 13:40:09.845760 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljht" event={"ID":"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a","Type":"ContainerDied","Data":"a62163c2cfb97cca7c4e3712fbf0fd39e53d14ba3acba0f10ac3988701771838"} Jan 21 13:40:10 crc kubenswrapper[4765]: I0121 13:40:10.861716 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljht" event={"ID":"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a","Type":"ContainerStarted","Data":"b4bc298d37a368fd2a5b7f765b0d6574794362966fb499935d46b34e4b653b56"} Jan 21 13:40:10 crc kubenswrapper[4765]: I0121 13:40:10.893114 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vljht" podStartSLOduration=3.464452054 podStartE2EDuration="5.893087545s" podCreationTimestamp="2026-01-21 13:40:05 +0000 UTC" firstStartedPulling="2026-01-21 13:40:07.82525219 +0000 UTC m=+2268.842978022" lastFinishedPulling="2026-01-21 13:40:10.253887691 +0000 UTC m=+2271.271613513" observedRunningTime="2026-01-21 13:40:10.887148123 +0000 UTC m=+2271.904873945" watchObservedRunningTime="2026-01-21 13:40:10.893087545 +0000 UTC m=+2271.910813367" Jan 21 13:40:14 crc kubenswrapper[4765]: I0121 13:40:14.446028 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:40:14 crc kubenswrapper[4765]: I0121 13:40:14.446743 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:40:16 crc kubenswrapper[4765]: I0121 13:40:16.223120 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vljht" Jan 21 13:40:16 crc kubenswrapper[4765]: I0121 13:40:16.224041 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vljht" Jan 21 13:40:16 crc kubenswrapper[4765]: I0121 13:40:16.276235 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vljht" Jan 21 13:40:16 crc kubenswrapper[4765]: I0121 13:40:16.972862 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vljht" Jan 21 13:40:17 crc kubenswrapper[4765]: I0121 13:40:17.030406 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vljht"] Jan 21 13:40:18 crc kubenswrapper[4765]: I0121 13:40:18.931273 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vljht" podUID="7ae7c754-c5ef-4e08-9836-e9a8103a9b5a" containerName="registry-server" containerID="cri-o://b4bc298d37a368fd2a5b7f765b0d6574794362966fb499935d46b34e4b653b56" gracePeriod=2 Jan 21 13:40:19 crc kubenswrapper[4765]: I0121 13:40:19.899413 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vljht" Jan 21 13:40:19 crc kubenswrapper[4765]: I0121 13:40:19.940817 4765 generic.go:334] "Generic (PLEG): container finished" podID="7ae7c754-c5ef-4e08-9836-e9a8103a9b5a" containerID="b4bc298d37a368fd2a5b7f765b0d6574794362966fb499935d46b34e4b653b56" exitCode=0 Jan 21 13:40:19 crc kubenswrapper[4765]: I0121 13:40:19.940946 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljht" event={"ID":"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a","Type":"ContainerDied","Data":"b4bc298d37a368fd2a5b7f765b0d6574794362966fb499935d46b34e4b653b56"} Jan 21 13:40:19 crc kubenswrapper[4765]: I0121 13:40:19.941831 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljht" event={"ID":"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a","Type":"ContainerDied","Data":"77915d061fdbd6bb20a05b861ab50b5d751b9a06826af922a1bce12b1d757240"} Jan 21 13:40:19 crc kubenswrapper[4765]: I0121 13:40:19.941901 4765 scope.go:117] "RemoveContainer" containerID="b4bc298d37a368fd2a5b7f765b0d6574794362966fb499935d46b34e4b653b56" Jan 21 13:40:19 crc kubenswrapper[4765]: I0121 13:40:19.941009 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vljht" Jan 21 13:40:19 crc kubenswrapper[4765]: I0121 13:40:19.967918 4765 scope.go:117] "RemoveContainer" containerID="a62163c2cfb97cca7c4e3712fbf0fd39e53d14ba3acba0f10ac3988701771838" Jan 21 13:40:20 crc kubenswrapper[4765]: I0121 13:40:20.005041 4765 scope.go:117] "RemoveContainer" containerID="cfc1575e40843ef010fcf9ba57c07374c2ec6ee59f760ebc7e9a868334e357b3" Jan 21 13:40:20 crc kubenswrapper[4765]: I0121 13:40:20.034689 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8kj8\" (UniqueName: \"kubernetes.io/projected/7ae7c754-c5ef-4e08-9836-e9a8103a9b5a-kube-api-access-v8kj8\") pod \"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a\" (UID: \"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a\") " Jan 21 13:40:20 crc kubenswrapper[4765]: I0121 13:40:20.034825 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ae7c754-c5ef-4e08-9836-e9a8103a9b5a-utilities\") pod \"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a\" (UID: \"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a\") " Jan 21 13:40:20 crc kubenswrapper[4765]: I0121 13:40:20.034860 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ae7c754-c5ef-4e08-9836-e9a8103a9b5a-catalog-content\") pod \"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a\" (UID: \"7ae7c754-c5ef-4e08-9836-e9a8103a9b5a\") " Jan 21 13:40:20 crc kubenswrapper[4765]: I0121 13:40:20.036053 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ae7c754-c5ef-4e08-9836-e9a8103a9b5a-utilities" (OuterVolumeSpecName: "utilities") pod "7ae7c754-c5ef-4e08-9836-e9a8103a9b5a" (UID: "7ae7c754-c5ef-4e08-9836-e9a8103a9b5a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:40:20 crc kubenswrapper[4765]: I0121 13:40:20.044351 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae7c754-c5ef-4e08-9836-e9a8103a9b5a-kube-api-access-v8kj8" (OuterVolumeSpecName: "kube-api-access-v8kj8") pod "7ae7c754-c5ef-4e08-9836-e9a8103a9b5a" (UID: "7ae7c754-c5ef-4e08-9836-e9a8103a9b5a"). InnerVolumeSpecName "kube-api-access-v8kj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:40:20 crc kubenswrapper[4765]: I0121 13:40:20.052006 4765 scope.go:117] "RemoveContainer" containerID="b4bc298d37a368fd2a5b7f765b0d6574794362966fb499935d46b34e4b653b56" Jan 21 13:40:20 crc kubenswrapper[4765]: E0121 13:40:20.052732 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4bc298d37a368fd2a5b7f765b0d6574794362966fb499935d46b34e4b653b56\": container with ID starting with b4bc298d37a368fd2a5b7f765b0d6574794362966fb499935d46b34e4b653b56 not found: ID does not exist" containerID="b4bc298d37a368fd2a5b7f765b0d6574794362966fb499935d46b34e4b653b56" Jan 21 13:40:20 crc kubenswrapper[4765]: I0121 13:40:20.052794 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4bc298d37a368fd2a5b7f765b0d6574794362966fb499935d46b34e4b653b56"} err="failed to get container status \"b4bc298d37a368fd2a5b7f765b0d6574794362966fb499935d46b34e4b653b56\": rpc error: code = NotFound desc = could not find container \"b4bc298d37a368fd2a5b7f765b0d6574794362966fb499935d46b34e4b653b56\": container with ID starting with b4bc298d37a368fd2a5b7f765b0d6574794362966fb499935d46b34e4b653b56 not found: ID does not exist" Jan 21 13:40:20 crc kubenswrapper[4765]: I0121 13:40:20.052843 4765 scope.go:117] "RemoveContainer" containerID="a62163c2cfb97cca7c4e3712fbf0fd39e53d14ba3acba0f10ac3988701771838" Jan 21 13:40:20 crc kubenswrapper[4765]: E0121 13:40:20.053444 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a62163c2cfb97cca7c4e3712fbf0fd39e53d14ba3acba0f10ac3988701771838\": container with ID starting with a62163c2cfb97cca7c4e3712fbf0fd39e53d14ba3acba0f10ac3988701771838 not found: ID does not exist" containerID="a62163c2cfb97cca7c4e3712fbf0fd39e53d14ba3acba0f10ac3988701771838" Jan 21 13:40:20 crc kubenswrapper[4765]: I0121 13:40:20.053486 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a62163c2cfb97cca7c4e3712fbf0fd39e53d14ba3acba0f10ac3988701771838"} err="failed to get container status \"a62163c2cfb97cca7c4e3712fbf0fd39e53d14ba3acba0f10ac3988701771838\": rpc error: code = NotFound desc = could not find container \"a62163c2cfb97cca7c4e3712fbf0fd39e53d14ba3acba0f10ac3988701771838\": container with ID starting with a62163c2cfb97cca7c4e3712fbf0fd39e53d14ba3acba0f10ac3988701771838 not found: ID does not exist" Jan 21 13:40:20 crc kubenswrapper[4765]: I0121 13:40:20.053514 4765 scope.go:117] "RemoveContainer" containerID="cfc1575e40843ef010fcf9ba57c07374c2ec6ee59f760ebc7e9a868334e357b3" Jan 21 13:40:20 crc kubenswrapper[4765]: E0121 13:40:20.053960 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfc1575e40843ef010fcf9ba57c07374c2ec6ee59f760ebc7e9a868334e357b3\": container with ID starting with cfc1575e40843ef010fcf9ba57c07374c2ec6ee59f760ebc7e9a868334e357b3 not found: ID does not exist" containerID="cfc1575e40843ef010fcf9ba57c07374c2ec6ee59f760ebc7e9a868334e357b3" Jan 21 13:40:20 crc kubenswrapper[4765]: I0121 13:40:20.054006 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfc1575e40843ef010fcf9ba57c07374c2ec6ee59f760ebc7e9a868334e357b3"} err="failed to get container status \"cfc1575e40843ef010fcf9ba57c07374c2ec6ee59f760ebc7e9a868334e357b3\": rpc error: code = NotFound desc = could not 
find container \"cfc1575e40843ef010fcf9ba57c07374c2ec6ee59f760ebc7e9a868334e357b3\": container with ID starting with cfc1575e40843ef010fcf9ba57c07374c2ec6ee59f760ebc7e9a868334e357b3 not found: ID does not exist" Jan 21 13:40:20 crc kubenswrapper[4765]: I0121 13:40:20.058247 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ae7c754-c5ef-4e08-9836-e9a8103a9b5a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ae7c754-c5ef-4e08-9836-e9a8103a9b5a" (UID: "7ae7c754-c5ef-4e08-9836-e9a8103a9b5a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:40:20 crc kubenswrapper[4765]: I0121 13:40:20.137623 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8kj8\" (UniqueName: \"kubernetes.io/projected/7ae7c754-c5ef-4e08-9836-e9a8103a9b5a-kube-api-access-v8kj8\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:20 crc kubenswrapper[4765]: I0121 13:40:20.137658 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ae7c754-c5ef-4e08-9836-e9a8103a9b5a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:20 crc kubenswrapper[4765]: I0121 13:40:20.137667 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ae7c754-c5ef-4e08-9836-e9a8103a9b5a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:20 crc kubenswrapper[4765]: I0121 13:40:20.511098 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vljht"] Jan 21 13:40:20 crc kubenswrapper[4765]: I0121 13:40:20.532619 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vljht"] Jan 21 13:40:21 crc kubenswrapper[4765]: I0121 13:40:21.627041 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ae7c754-c5ef-4e08-9836-e9a8103a9b5a" path="/var/lib/kubelet/pods/7ae7c754-c5ef-4e08-9836-e9a8103a9b5a/volumes" Jan 21 13:40:32 crc kubenswrapper[4765]: I0121 13:40:32.065745 4765 generic.go:334] "Generic (PLEG): container finished" podID="833e4a2d-2bcb-4dfe-90ba-2e239625d5bf" containerID="801eb5906746540e5e46d2129474292d8289d063c858ad7a8c3215a0b6a8f053" exitCode=0 Jan 21 13:40:32 crc kubenswrapper[4765]: I0121 13:40:32.065799 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" event={"ID":"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf","Type":"ContainerDied","Data":"801eb5906746540e5e46d2129474292d8289d063c858ad7a8c3215a0b6a8f053"} Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.473050 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.607920 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.608017 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-ovn-default-certs-0\") pod \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.608052 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.608108 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv4fq\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-kube-api-access-hv4fq\") pod \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.608182 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-telemetry-combined-ca-bundle\") pod \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.608936 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-nova-combined-ca-bundle\") pod \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.609005 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-ovn-combined-ca-bundle\") pod \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.609035 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-repo-setup-combined-ca-bundle\") pod \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.609141 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-bootstrap-combined-ca-bundle\") pod \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\" (UID: 
\"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.609174 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-ssh-key-openstack-edpm-ipam\") pod \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.609235 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-libvirt-combined-ca-bundle\") pod \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.609286 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.609316 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-neutron-metadata-combined-ca-bundle\") pod \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.609371 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-inventory\") pod \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\" (UID: \"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf\") " Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.616831 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf" (UID: "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.618447 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf" (UID: "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.621331 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf" (UID: "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.623108 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf" (UID: "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.624591 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf" (UID: "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.624684 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf" (UID: "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.624819 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf" (UID: "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.624962 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-kube-api-access-hv4fq" (OuterVolumeSpecName: "kube-api-access-hv4fq") pod "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf" (UID: "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf"). InnerVolumeSpecName "kube-api-access-hv4fq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.626942 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf" (UID: "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.627554 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf" (UID: "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.628198 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf" (UID: "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.636348 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf" (UID: "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.646503 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf" (UID: "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.646957 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-inventory" (OuterVolumeSpecName: "inventory") pod "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf" (UID: "833e4a2d-2bcb-4dfe-90ba-2e239625d5bf"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.711984 4765 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.712016 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.712029 4765 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.712042 4765 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.713019 4765 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.713302 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.713376 4765 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.713396 4765 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.713408 4765 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.713447 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv4fq\" (UniqueName: \"kubernetes.io/projected/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-kube-api-access-hv4fq\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.713461 4765 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.713474 4765 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.713487 4765 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:33 crc kubenswrapper[4765]: I0121 13:40:33.713498 4765 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/833e4a2d-2bcb-4dfe-90ba-2e239625d5bf-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.094330 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" event={"ID":"833e4a2d-2bcb-4dfe-90ba-2e239625d5bf","Type":"ContainerDied","Data":"60a408143421b2a228cfaaf70b1e4792879cfe5e913b8c6b998ef9ff3be854f5"} Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.094379 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60a408143421b2a228cfaaf70b1e4792879cfe5e913b8c6b998ef9ff3be854f5" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.094421 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.234400 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn"] Jan 21 13:40:34 crc kubenswrapper[4765]: E0121 13:40:34.235024 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="833e4a2d-2bcb-4dfe-90ba-2e239625d5bf" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.235122 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="833e4a2d-2bcb-4dfe-90ba-2e239625d5bf" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 13:40:34 crc kubenswrapper[4765]: E0121 13:40:34.235243 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae7c754-c5ef-4e08-9836-e9a8103a9b5a" containerName="extract-utilities" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.235356 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae7c754-c5ef-4e08-9836-e9a8103a9b5a" containerName="extract-utilities" Jan 21 13:40:34 crc kubenswrapper[4765]: E0121 13:40:34.235458 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae7c754-c5ef-4e08-9836-e9a8103a9b5a" containerName="extract-content" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.235547 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae7c754-c5ef-4e08-9836-e9a8103a9b5a" containerName="extract-content" Jan 21 13:40:34 crc kubenswrapper[4765]: E0121 13:40:34.235627 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae7c754-c5ef-4e08-9836-e9a8103a9b5a" containerName="registry-server" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.235702 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae7c754-c5ef-4e08-9836-e9a8103a9b5a" containerName="registry-server" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.236033 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="833e4a2d-2bcb-4dfe-90ba-2e239625d5bf" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 
13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.236139 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae7c754-c5ef-4e08-9836-e9a8103a9b5a" containerName="registry-server" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.237081 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.241109 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x88cm" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.241179 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.241432 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.241472 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.241595 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.247899 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn"] Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.325797 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db5e6d29-c1aa-4a16-99a9-e2d559619d90-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqnjn\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.326277 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/db5e6d29-c1aa-4a16-99a9-e2d559619d90-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqnjn\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.326433 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5e6d29-c1aa-4a16-99a9-e2d559619d90-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqnjn\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.326525 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xntkg\" (UniqueName: \"kubernetes.io/projected/db5e6d29-c1aa-4a16-99a9-e2d559619d90-kube-api-access-xntkg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqnjn\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.326606 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/db5e6d29-c1aa-4a16-99a9-e2d559619d90-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqnjn\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.428772 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/db5e6d29-c1aa-4a16-99a9-e2d559619d90-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqnjn\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.428889 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5e6d29-c1aa-4a16-99a9-e2d559619d90-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqnjn\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.428953 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xntkg\" (UniqueName: \"kubernetes.io/projected/db5e6d29-c1aa-4a16-99a9-e2d559619d90-kube-api-access-xntkg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqnjn\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.428993 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db5e6d29-c1aa-4a16-99a9-e2d559619d90-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqnjn\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.429098 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db5e6d29-c1aa-4a16-99a9-e2d559619d90-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqnjn\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.430570 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/db5e6d29-c1aa-4a16-99a9-e2d559619d90-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqnjn\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.433340 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5e6d29-c1aa-4a16-99a9-e2d559619d90-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqnjn\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.433884 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db5e6d29-c1aa-4a16-99a9-e2d559619d90-inventory\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-cqnjn\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.437161 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db5e6d29-c1aa-4a16-99a9-e2d559619d90-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqnjn\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.447338 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xntkg\" (UniqueName: \"kubernetes.io/projected/db5e6d29-c1aa-4a16-99a9-e2d559619d90-kube-api-access-xntkg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-cqnjn\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" Jan 21 13:40:34 crc kubenswrapper[4765]: I0121 13:40:34.553511 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" Jan 21 13:40:35 crc kubenswrapper[4765]: I0121 13:40:35.109987 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn"] Jan 21 13:40:36 crc kubenswrapper[4765]: I0121 13:40:36.122756 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" event={"ID":"db5e6d29-c1aa-4a16-99a9-e2d559619d90","Type":"ContainerStarted","Data":"47507bf511b8a853b2973e80ce18bb2472572327980f0f2500f782f275647939"} Jan 21 13:40:36 crc kubenswrapper[4765]: I0121 13:40:36.123272 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" event={"ID":"db5e6d29-c1aa-4a16-99a9-e2d559619d90","Type":"ContainerStarted","Data":"6a2fb51af4b4de1a1068330bb6637d6eb8f622947ef47d53188295308e7b5c69"} Jan 21 13:40:36 crc kubenswrapper[4765]: I0121 13:40:36.146172 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" podStartSLOduration=1.6688970699999999 podStartE2EDuration="2.146154649s" podCreationTimestamp="2026-01-21 13:40:34 +0000 UTC" firstStartedPulling="2026-01-21 13:40:35.119942383 +0000 UTC m=+2296.137668205" lastFinishedPulling="2026-01-21 13:40:35.597199962 +0000 UTC m=+2296.614925784" observedRunningTime="2026-01-21 13:40:36.141832824 +0000 UTC m=+2297.159558676" watchObservedRunningTime="2026-01-21 13:40:36.146154649 +0000 UTC m=+2297.163880471" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.477952 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tzw2b"] Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.480780 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tzw2b" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.486398 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tzw2b"] Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.631275 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a98c710-b854-4617-86b8-9c40f1ac12a4-utilities\") pod \"community-operators-tzw2b\" (UID: \"8a98c710-b854-4617-86b8-9c40f1ac12a4\") " pod="openshift-marketplace/community-operators-tzw2b" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.631327 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a98c710-b854-4617-86b8-9c40f1ac12a4-catalog-content\") pod \"community-operators-tzw2b\" (UID: \"8a98c710-b854-4617-86b8-9c40f1ac12a4\") " pod="openshift-marketplace/community-operators-tzw2b" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.632175 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m8vk\" (UniqueName: \"kubernetes.io/projected/8a98c710-b854-4617-86b8-9c40f1ac12a4-kube-api-access-5m8vk\") pod \"community-operators-tzw2b\" (UID: \"8a98c710-b854-4617-86b8-9c40f1ac12a4\") " pod="openshift-marketplace/community-operators-tzw2b" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.675707 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-snvlh"] Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.677994 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-snvlh" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.687271 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-snvlh"] Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.734536 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66586db-9068-4225-a160-a18efc519fad-utilities\") pod \"certified-operators-snvlh\" (UID: \"f66586db-9068-4225-a160-a18efc519fad\") " pod="openshift-marketplace/certified-operators-snvlh" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.734608 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwxdp\" (UniqueName: \"kubernetes.io/projected/f66586db-9068-4225-a160-a18efc519fad-kube-api-access-rwxdp\") pod \"certified-operators-snvlh\" (UID: \"f66586db-9068-4225-a160-a18efc519fad\") " pod="openshift-marketplace/certified-operators-snvlh" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.734710 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a98c710-b854-4617-86b8-9c40f1ac12a4-utilities\") pod \"community-operators-tzw2b\" (UID: \"8a98c710-b854-4617-86b8-9c40f1ac12a4\") " pod="openshift-marketplace/community-operators-tzw2b" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.735077 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a98c710-b854-4617-86b8-9c40f1ac12a4-catalog-content\") pod \"community-operators-tzw2b\" (UID: \"8a98c710-b854-4617-86b8-9c40f1ac12a4\") " pod="openshift-marketplace/community-operators-tzw2b" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.735182 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a98c710-b854-4617-86b8-9c40f1ac12a4-utilities\") pod \"community-operators-tzw2b\" (UID: \"8a98c710-b854-4617-86b8-9c40f1ac12a4\") " pod="openshift-marketplace/community-operators-tzw2b" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.735435 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66586db-9068-4225-a160-a18efc519fad-catalog-content\") pod \"certified-operators-snvlh\" (UID: \"f66586db-9068-4225-a160-a18efc519fad\") " pod="openshift-marketplace/certified-operators-snvlh" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.735443 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a98c710-b854-4617-86b8-9c40f1ac12a4-catalog-content\") pod \"community-operators-tzw2b\" (UID: \"8a98c710-b854-4617-86b8-9c40f1ac12a4\") " pod="openshift-marketplace/community-operators-tzw2b" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.735587 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m8vk\" (UniqueName: \"kubernetes.io/projected/8a98c710-b854-4617-86b8-9c40f1ac12a4-kube-api-access-5m8vk\") pod \"community-operators-tzw2b\" (UID: \"8a98c710-b854-4617-86b8-9c40f1ac12a4\") " pod="openshift-marketplace/community-operators-tzw2b" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.761263 4765 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5m8vk\" (UniqueName: \"kubernetes.io/projected/8a98c710-b854-4617-86b8-9c40f1ac12a4-kube-api-access-5m8vk\") pod \"community-operators-tzw2b\" (UID: \"8a98c710-b854-4617-86b8-9c40f1ac12a4\") " pod="openshift-marketplace/community-operators-tzw2b" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.808590 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tzw2b" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.837087 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66586db-9068-4225-a160-a18efc519fad-catalog-content\") pod \"certified-operators-snvlh\" (UID: \"f66586db-9068-4225-a160-a18efc519fad\") " pod="openshift-marketplace/certified-operators-snvlh" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.837441 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66586db-9068-4225-a160-a18efc519fad-utilities\") pod \"certified-operators-snvlh\" (UID: \"f66586db-9068-4225-a160-a18efc519fad\") " pod="openshift-marketplace/certified-operators-snvlh" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.837504 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwxdp\" (UniqueName: \"kubernetes.io/projected/f66586db-9068-4225-a160-a18efc519fad-kube-api-access-rwxdp\") pod \"certified-operators-snvlh\" (UID: \"f66586db-9068-4225-a160-a18efc519fad\") " pod="openshift-marketplace/certified-operators-snvlh" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.837910 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66586db-9068-4225-a160-a18efc519fad-catalog-content\") pod \"certified-operators-snvlh\" (UID: \"f66586db-9068-4225-a160-a18efc519fad\") " pod="openshift-marketplace/certified-operators-snvlh" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.841621 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66586db-9068-4225-a160-a18efc519fad-utilities\") pod \"certified-operators-snvlh\" (UID: \"f66586db-9068-4225-a160-a18efc519fad\") " pod="openshift-marketplace/certified-operators-snvlh" Jan 21 13:40:40 crc kubenswrapper[4765]: I0121 13:40:40.857630 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwxdp\" (UniqueName: \"kubernetes.io/projected/f66586db-9068-4225-a160-a18efc519fad-kube-api-access-rwxdp\") pod \"certified-operators-snvlh\" (UID: \"f66586db-9068-4225-a160-a18efc519fad\") " pod="openshift-marketplace/certified-operators-snvlh" Jan 21 13:40:41 crc kubenswrapper[4765]: I0121 13:40:41.002785 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-snvlh" Jan 21 13:40:41 crc kubenswrapper[4765]: I0121 13:40:41.331530 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tzw2b"] Jan 21 13:40:41 crc kubenswrapper[4765]: W0121 13:40:41.344434 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a98c710_b854_4617_86b8_9c40f1ac12a4.slice/crio-a55d7de651953b4a2867e0de4038a2454ba79d1a8c6b8d6c5b56240a5b794e4b WatchSource:0}: Error finding container a55d7de651953b4a2867e0de4038a2454ba79d1a8c6b8d6c5b56240a5b794e4b: Status 404 returned error can't find the container with id a55d7de651953b4a2867e0de4038a2454ba79d1a8c6b8d6c5b56240a5b794e4b Jan 21 13:40:41 crc kubenswrapper[4765]: I0121 13:40:41.814499 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-snvlh"] Jan 21 13:40:42 crc kubenswrapper[4765]: I0121 13:40:42.287748 4765 generic.go:334] "Generic (PLEG): container finished" podID="f66586db-9068-4225-a160-a18efc519fad" containerID="0cf7267db570d1e1fc73c9baadfdaee901d772acf173a2744986e5a1e9326e0e" exitCode=0 Jan 21 13:40:42 crc kubenswrapper[4765]: I0121 13:40:42.288019 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snvlh" event={"ID":"f66586db-9068-4225-a160-a18efc519fad","Type":"ContainerDied","Data":"0cf7267db570d1e1fc73c9baadfdaee901d772acf173a2744986e5a1e9326e0e"} Jan 21 13:40:42 crc kubenswrapper[4765]: I0121 13:40:42.288048 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snvlh" event={"ID":"f66586db-9068-4225-a160-a18efc519fad","Type":"ContainerStarted","Data":"fca5c524ecceb4d4ff76b0b24b343a1d463cf28741df3803b8751eb7cead2f81"} Jan 21 13:40:42 crc kubenswrapper[4765]: I0121 13:40:42.290457 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:40:42 crc kubenswrapper[4765]: I0121 13:40:42.292102 4765 generic.go:334] "Generic (PLEG): container finished" podID="8a98c710-b854-4617-86b8-9c40f1ac12a4" containerID="55ac22ef123541edd7d30bdfac38035f3546bc595e8290c3bf39f44463f2154e" exitCode=0 Jan 21 13:40:42 crc kubenswrapper[4765]: I0121 13:40:42.292162 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzw2b" event={"ID":"8a98c710-b854-4617-86b8-9c40f1ac12a4","Type":"ContainerDied","Data":"55ac22ef123541edd7d30bdfac38035f3546bc595e8290c3bf39f44463f2154e"} Jan 21 13:40:42 crc kubenswrapper[4765]: I0121 13:40:42.292227 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzw2b" event={"ID":"8a98c710-b854-4617-86b8-9c40f1ac12a4","Type":"ContainerStarted","Data":"a55d7de651953b4a2867e0de4038a2454ba79d1a8c6b8d6c5b56240a5b794e4b"} Jan 21 13:40:44 crc kubenswrapper[4765]: I0121 13:40:44.446251 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:40:44 crc kubenswrapper[4765]: I0121 13:40:44.446546 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:40:46 crc kubenswrapper[4765]: I0121 13:40:46.332065 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzw2b" event={"ID":"8a98c710-b854-4617-86b8-9c40f1ac12a4","Type":"ContainerStarted","Data":"d41e63087382c9e21cd36e8f5663f3dd1f9e17d4dee235b55fb47b0ebbcc4a33"} Jan 21 13:40:46 crc kubenswrapper[4765]: I0121 13:40:46.334781 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snvlh" event={"ID":"f66586db-9068-4225-a160-a18efc519fad","Type":"ContainerStarted","Data":"913032644b9aafb0e0d393456118c298b5962b123d0aa0bfa7806f341421fdfb"} Jan 21 13:40:47 crc kubenswrapper[4765]: I0121 13:40:47.346091 4765 generic.go:334] "Generic (PLEG): container finished" podID="8a98c710-b854-4617-86b8-9c40f1ac12a4" containerID="d41e63087382c9e21cd36e8f5663f3dd1f9e17d4dee235b55fb47b0ebbcc4a33" exitCode=0 Jan 21 13:40:47 crc kubenswrapper[4765]: I0121 13:40:47.346171 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzw2b" event={"ID":"8a98c710-b854-4617-86b8-9c40f1ac12a4","Type":"ContainerDied","Data":"d41e63087382c9e21cd36e8f5663f3dd1f9e17d4dee235b55fb47b0ebbcc4a33"} Jan 21 13:40:47 crc kubenswrapper[4765]: I0121 13:40:47.348580 4765 generic.go:334] "Generic (PLEG): container finished" podID="f66586db-9068-4225-a160-a18efc519fad" containerID="913032644b9aafb0e0d393456118c298b5962b123d0aa0bfa7806f341421fdfb" exitCode=0 Jan 21 13:40:47 crc kubenswrapper[4765]: I0121 13:40:47.348622 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snvlh" event={"ID":"f66586db-9068-4225-a160-a18efc519fad","Type":"ContainerDied","Data":"913032644b9aafb0e0d393456118c298b5962b123d0aa0bfa7806f341421fdfb"} Jan 21 13:40:48 crc kubenswrapper[4765]: I0121 13:40:48.362601 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snvlh" event={"ID":"f66586db-9068-4225-a160-a18efc519fad","Type":"ContainerStarted","Data":"23db87dc5550b21704ee24f451409b10a1785a1c86983ccba1be036fda7fc6fe"} Jan 21 13:40:48 crc kubenswrapper[4765]: I0121 13:40:48.366936 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzw2b" event={"ID":"8a98c710-b854-4617-86b8-9c40f1ac12a4","Type":"ContainerStarted","Data":"56bd6113044fc66c6e46975984ec60d86909afc3ea0c000604891c1e67ed4626"} Jan 21 13:40:48 crc kubenswrapper[4765]: I0121 13:40:48.388290 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-snvlh" podStartSLOduration=2.806554775 podStartE2EDuration="8.388272563s" podCreationTimestamp="2026-01-21 13:40:40 +0000 UTC" firstStartedPulling="2026-01-21 13:40:42.290273495 +0000 UTC m=+2303.307999307" lastFinishedPulling="2026-01-21 13:40:47.871991283 +0000 UTC m=+2308.889717095" observedRunningTime="2026-01-21 13:40:48.382973229 +0000 UTC m=+2309.400699051" watchObservedRunningTime="2026-01-21 13:40:48.388272563 +0000 UTC m=+2309.405998385" Jan 21 13:40:48 crc kubenswrapper[4765]: I0121 13:40:48.416765 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tzw2b" podStartSLOduration=2.807045919 podStartE2EDuration="8.416747379s" podCreationTimestamp="2026-01-21 13:40:40 +0000 UTC" firstStartedPulling="2026-01-21 
13:40:42.294320822 +0000 UTC m=+2303.312046644" lastFinishedPulling="2026-01-21 13:40:47.904022282 +0000 UTC m=+2308.921748104" observedRunningTime="2026-01-21 13:40:48.410485117 +0000 UTC m=+2309.428210939" watchObservedRunningTime="2026-01-21 13:40:48.416747379 +0000 UTC m=+2309.434473201" Jan 21 13:40:50 crc kubenswrapper[4765]: I0121 13:40:50.809888 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tzw2b" Jan 21 13:40:50 crc kubenswrapper[4765]: I0121 13:40:50.810465 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tzw2b" Jan 21 13:40:50 crc kubenswrapper[4765]: I0121 13:40:50.856996 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tzw2b" Jan 21 13:40:51 crc kubenswrapper[4765]: I0121 13:40:51.003382 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-snvlh" Jan 21 13:40:51 crc kubenswrapper[4765]: I0121 13:40:51.003441 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-snvlh" Jan 21 13:40:52 crc kubenswrapper[4765]: I0121 13:40:52.053826 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-snvlh" podUID="f66586db-9068-4225-a160-a18efc519fad" containerName="registry-server" probeResult="failure" output=< Jan 21 13:40:52 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s Jan 21 13:40:52 crc kubenswrapper[4765]: > Jan 21 13:41:00 crc kubenswrapper[4765]: I0121 13:41:00.898426 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tzw2b" Jan 21 13:41:00 crc kubenswrapper[4765]: I0121 13:41:00.984337 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tzw2b"] Jan 21 13:41:01 crc kubenswrapper[4765]: I0121 13:41:01.050927 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-snvlh" Jan 21 13:41:01 crc kubenswrapper[4765]: I0121 13:41:01.096805 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-snvlh" Jan 21 13:41:01 crc kubenswrapper[4765]: I0121 13:41:01.483118 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tzw2b" podUID="8a98c710-b854-4617-86b8-9c40f1ac12a4" containerName="registry-server" containerID="cri-o://56bd6113044fc66c6e46975984ec60d86909afc3ea0c000604891c1e67ed4626" gracePeriod=2 Jan 21 13:41:01 crc kubenswrapper[4765]: I0121 13:41:01.926565 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tzw2b" Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.026010 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m8vk\" (UniqueName: \"kubernetes.io/projected/8a98c710-b854-4617-86b8-9c40f1ac12a4-kube-api-access-5m8vk\") pod \"8a98c710-b854-4617-86b8-9c40f1ac12a4\" (UID: \"8a98c710-b854-4617-86b8-9c40f1ac12a4\") " Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.026085 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a98c710-b854-4617-86b8-9c40f1ac12a4-catalog-content\") pod \"8a98c710-b854-4617-86b8-9c40f1ac12a4\" (UID: \"8a98c710-b854-4617-86b8-9c40f1ac12a4\") " Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.026133 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a98c710-b854-4617-86b8-9c40f1ac12a4-utilities\") pod \"8a98c710-b854-4617-86b8-9c40f1ac12a4\" (UID: \"8a98c710-b854-4617-86b8-9c40f1ac12a4\") " Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.027051 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a98c710-b854-4617-86b8-9c40f1ac12a4-utilities" (OuterVolumeSpecName: "utilities") pod "8a98c710-b854-4617-86b8-9c40f1ac12a4" (UID: "8a98c710-b854-4617-86b8-9c40f1ac12a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.027666 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a98c710-b854-4617-86b8-9c40f1ac12a4-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.049460 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a98c710-b854-4617-86b8-9c40f1ac12a4-kube-api-access-5m8vk" (OuterVolumeSpecName: "kube-api-access-5m8vk") pod "8a98c710-b854-4617-86b8-9c40f1ac12a4" (UID: "8a98c710-b854-4617-86b8-9c40f1ac12a4"). InnerVolumeSpecName "kube-api-access-5m8vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.087742 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a98c710-b854-4617-86b8-9c40f1ac12a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a98c710-b854-4617-86b8-9c40f1ac12a4" (UID: "8a98c710-b854-4617-86b8-9c40f1ac12a4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.129687 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m8vk\" (UniqueName: \"kubernetes.io/projected/8a98c710-b854-4617-86b8-9c40f1ac12a4-kube-api-access-5m8vk\") on node \"crc\" DevicePath \"\"" Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.129722 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a98c710-b854-4617-86b8-9c40f1ac12a4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.492134 4765 generic.go:334] "Generic (PLEG): container finished" podID="8a98c710-b854-4617-86b8-9c40f1ac12a4" containerID="56bd6113044fc66c6e46975984ec60d86909afc3ea0c000604891c1e67ed4626" exitCode=0 Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.492216 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzw2b" event={"ID":"8a98c710-b854-4617-86b8-9c40f1ac12a4","Type":"ContainerDied","Data":"56bd6113044fc66c6e46975984ec60d86909afc3ea0c000604891c1e67ed4626"} Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.492902 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzw2b" event={"ID":"8a98c710-b854-4617-86b8-9c40f1ac12a4","Type":"ContainerDied","Data":"a55d7de651953b4a2867e0de4038a2454ba79d1a8c6b8d6c5b56240a5b794e4b"} Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.492197 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tzw2b" Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.492971 4765 scope.go:117] "RemoveContainer" containerID="56bd6113044fc66c6e46975984ec60d86909afc3ea0c000604891c1e67ed4626" Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.530903 4765 scope.go:117] "RemoveContainer" containerID="d41e63087382c9e21cd36e8f5663f3dd1f9e17d4dee235b55fb47b0ebbcc4a33" Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.547108 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tzw2b"] Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.566469 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tzw2b"] Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.577381 4765 scope.go:117] "RemoveContainer" containerID="55ac22ef123541edd7d30bdfac38035f3546bc595e8290c3bf39f44463f2154e" Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.613114 4765 scope.go:117] "RemoveContainer" containerID="56bd6113044fc66c6e46975984ec60d86909afc3ea0c000604891c1e67ed4626" Jan 21 13:41:02 crc kubenswrapper[4765]: E0121 13:41:02.614257 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56bd6113044fc66c6e46975984ec60d86909afc3ea0c000604891c1e67ed4626\": container with ID starting with 56bd6113044fc66c6e46975984ec60d86909afc3ea0c000604891c1e67ed4626 not found: ID does not exist" containerID="56bd6113044fc66c6e46975984ec60d86909afc3ea0c000604891c1e67ed4626" Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.614310 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56bd6113044fc66c6e46975984ec60d86909afc3ea0c000604891c1e67ed4626"} err="failed to get container status 
\"56bd6113044fc66c6e46975984ec60d86909afc3ea0c000604891c1e67ed4626\": rpc error: code = NotFound desc = could not find container \"56bd6113044fc66c6e46975984ec60d86909afc3ea0c000604891c1e67ed4626\": container with ID starting with 56bd6113044fc66c6e46975984ec60d86909afc3ea0c000604891c1e67ed4626 not found: ID does not exist" Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.614339 4765 scope.go:117] "RemoveContainer" containerID="d41e63087382c9e21cd36e8f5663f3dd1f9e17d4dee235b55fb47b0ebbcc4a33" Jan 21 13:41:02 crc kubenswrapper[4765]: E0121 13:41:02.617625 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d41e63087382c9e21cd36e8f5663f3dd1f9e17d4dee235b55fb47b0ebbcc4a33\": container with ID starting with d41e63087382c9e21cd36e8f5663f3dd1f9e17d4dee235b55fb47b0ebbcc4a33 not found: ID does not exist" containerID="d41e63087382c9e21cd36e8f5663f3dd1f9e17d4dee235b55fb47b0ebbcc4a33" Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.617698 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d41e63087382c9e21cd36e8f5663f3dd1f9e17d4dee235b55fb47b0ebbcc4a33"} err="failed to get container status \"d41e63087382c9e21cd36e8f5663f3dd1f9e17d4dee235b55fb47b0ebbcc4a33\": rpc error: code = NotFound desc = could not find container \"d41e63087382c9e21cd36e8f5663f3dd1f9e17d4dee235b55fb47b0ebbcc4a33\": container with ID starting with d41e63087382c9e21cd36e8f5663f3dd1f9e17d4dee235b55fb47b0ebbcc4a33 not found: ID does not exist" Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.617732 4765 scope.go:117] "RemoveContainer" containerID="55ac22ef123541edd7d30bdfac38035f3546bc595e8290c3bf39f44463f2154e" Jan 21 13:41:02 crc kubenswrapper[4765]: E0121 13:41:02.618162 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55ac22ef123541edd7d30bdfac38035f3546bc595e8290c3bf39f44463f2154e\": container with ID starting with 55ac22ef123541edd7d30bdfac38035f3546bc595e8290c3bf39f44463f2154e not found: ID does not exist" containerID="55ac22ef123541edd7d30bdfac38035f3546bc595e8290c3bf39f44463f2154e" Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.618195 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55ac22ef123541edd7d30bdfac38035f3546bc595e8290c3bf39f44463f2154e"} err="failed to get container status \"55ac22ef123541edd7d30bdfac38035f3546bc595e8290c3bf39f44463f2154e\": rpc error: code = NotFound desc = could not find container \"55ac22ef123541edd7d30bdfac38035f3546bc595e8290c3bf39f44463f2154e\": container with ID starting with 55ac22ef123541edd7d30bdfac38035f3546bc595e8290c3bf39f44463f2154e not found: ID does not exist" Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.950545 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-snvlh"] Jan 21 13:41:02 crc kubenswrapper[4765]: I0121 13:41:02.950827 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-snvlh" podUID="f66586db-9068-4225-a160-a18efc519fad" containerName="registry-server" containerID="cri-o://23db87dc5550b21704ee24f451409b10a1785a1c86983ccba1be036fda7fc6fe" gracePeriod=2 Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.409327 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-snvlh" Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.502508 4765 generic.go:334] "Generic (PLEG): container finished" podID="f66586db-9068-4225-a160-a18efc519fad" containerID="23db87dc5550b21704ee24f451409b10a1785a1c86983ccba1be036fda7fc6fe" exitCode=0 Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.502577 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-snvlh" Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.502602 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snvlh" event={"ID":"f66586db-9068-4225-a160-a18efc519fad","Type":"ContainerDied","Data":"23db87dc5550b21704ee24f451409b10a1785a1c86983ccba1be036fda7fc6fe"} Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.502664 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snvlh" event={"ID":"f66586db-9068-4225-a160-a18efc519fad","Type":"ContainerDied","Data":"fca5c524ecceb4d4ff76b0b24b343a1d463cf28741df3803b8751eb7cead2f81"} Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.502687 4765 scope.go:117] "RemoveContainer" containerID="23db87dc5550b21704ee24f451409b10a1785a1c86983ccba1be036fda7fc6fe" Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.522576 4765 scope.go:117] "RemoveContainer" containerID="913032644b9aafb0e0d393456118c298b5962b123d0aa0bfa7806f341421fdfb" Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.539888 4765 scope.go:117] "RemoveContainer" containerID="0cf7267db570d1e1fc73c9baadfdaee901d772acf173a2744986e5a1e9326e0e" Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.563353 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwxdp\" (UniqueName: \"kubernetes.io/projected/f66586db-9068-4225-a160-a18efc519fad-kube-api-access-rwxdp\") pod \"f66586db-9068-4225-a160-a18efc519fad\" (UID: \"f66586db-9068-4225-a160-a18efc519fad\") " Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.564360 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66586db-9068-4225-a160-a18efc519fad-catalog-content\") pod \"f66586db-9068-4225-a160-a18efc519fad\" (UID: \"f66586db-9068-4225-a160-a18efc519fad\") " Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.564565 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66586db-9068-4225-a160-a18efc519fad-utilities\") pod \"f66586db-9068-4225-a160-a18efc519fad\" (UID: \"f66586db-9068-4225-a160-a18efc519fad\") " Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.565325 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f66586db-9068-4225-a160-a18efc519fad-utilities" (OuterVolumeSpecName: "utilities") pod "f66586db-9068-4225-a160-a18efc519fad" (UID: "f66586db-9068-4225-a160-a18efc519fad"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.573475 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f66586db-9068-4225-a160-a18efc519fad-kube-api-access-rwxdp" (OuterVolumeSpecName: "kube-api-access-rwxdp") pod "f66586db-9068-4225-a160-a18efc519fad" (UID: "f66586db-9068-4225-a160-a18efc519fad"). InnerVolumeSpecName "kube-api-access-rwxdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.583261 4765 scope.go:117] "RemoveContainer" containerID="23db87dc5550b21704ee24f451409b10a1785a1c86983ccba1be036fda7fc6fe" Jan 21 13:41:03 crc kubenswrapper[4765]: E0121 13:41:03.583915 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23db87dc5550b21704ee24f451409b10a1785a1c86983ccba1be036fda7fc6fe\": container with ID starting with 23db87dc5550b21704ee24f451409b10a1785a1c86983ccba1be036fda7fc6fe not found: ID does not exist" containerID="23db87dc5550b21704ee24f451409b10a1785a1c86983ccba1be036fda7fc6fe" Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.583955 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23db87dc5550b21704ee24f451409b10a1785a1c86983ccba1be036fda7fc6fe"} err="failed to get container status \"23db87dc5550b21704ee24f451409b10a1785a1c86983ccba1be036fda7fc6fe\": rpc error: code = NotFound desc = could not find container \"23db87dc5550b21704ee24f451409b10a1785a1c86983ccba1be036fda7fc6fe\": container with ID starting with 23db87dc5550b21704ee24f451409b10a1785a1c86983ccba1be036fda7fc6fe not found: ID does not exist" Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.583978 4765 scope.go:117] "RemoveContainer" containerID="913032644b9aafb0e0d393456118c298b5962b123d0aa0bfa7806f341421fdfb" Jan 21 13:41:03 crc kubenswrapper[4765]: E0121 13:41:03.584177 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"913032644b9aafb0e0d393456118c298b5962b123d0aa0bfa7806f341421fdfb\": container with ID starting with 913032644b9aafb0e0d393456118c298b5962b123d0aa0bfa7806f341421fdfb not found: ID does not exist" containerID="913032644b9aafb0e0d393456118c298b5962b123d0aa0bfa7806f341421fdfb" Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.584221 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"913032644b9aafb0e0d393456118c298b5962b123d0aa0bfa7806f341421fdfb"} err="failed to get container status \"913032644b9aafb0e0d393456118c298b5962b123d0aa0bfa7806f341421fdfb\": rpc error: code = NotFound desc = could not find container \"913032644b9aafb0e0d393456118c298b5962b123d0aa0bfa7806f341421fdfb\": container with ID starting with 913032644b9aafb0e0d393456118c298b5962b123d0aa0bfa7806f341421fdfb not found: ID does not exist" Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.584238 4765 scope.go:117] "RemoveContainer" containerID="0cf7267db570d1e1fc73c9baadfdaee901d772acf173a2744986e5a1e9326e0e" Jan 21 13:41:03 crc kubenswrapper[4765]: E0121 13:41:03.584418 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cf7267db570d1e1fc73c9baadfdaee901d772acf173a2744986e5a1e9326e0e\": container with ID starting with 0cf7267db570d1e1fc73c9baadfdaee901d772acf173a2744986e5a1e9326e0e not found: ID does not 
exist" containerID="0cf7267db570d1e1fc73c9baadfdaee901d772acf173a2744986e5a1e9326e0e" Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.584437 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cf7267db570d1e1fc73c9baadfdaee901d772acf173a2744986e5a1e9326e0e"} err="failed to get container status \"0cf7267db570d1e1fc73c9baadfdaee901d772acf173a2744986e5a1e9326e0e\": rpc error: code = NotFound desc = could not find container \"0cf7267db570d1e1fc73c9baadfdaee901d772acf173a2744986e5a1e9326e0e\": container with ID starting with 0cf7267db570d1e1fc73c9baadfdaee901d772acf173a2744986e5a1e9326e0e not found: ID does not exist" Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.613803 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f66586db-9068-4225-a160-a18efc519fad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f66586db-9068-4225-a160-a18efc519fad" (UID: "f66586db-9068-4225-a160-a18efc519fad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.624437 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a98c710-b854-4617-86b8-9c40f1ac12a4" path="/var/lib/kubelet/pods/8a98c710-b854-4617-86b8-9c40f1ac12a4/volumes" Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.666694 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f66586db-9068-4225-a160-a18efc519fad-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.666735 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwxdp\" (UniqueName: \"kubernetes.io/projected/f66586db-9068-4225-a160-a18efc519fad-kube-api-access-rwxdp\") on node \"crc\" DevicePath \"\"" Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.666747 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f66586db-9068-4225-a160-a18efc519fad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.834759 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-snvlh"] Jan 21 13:41:03 crc kubenswrapper[4765]: I0121 13:41:03.843154 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-snvlh"] Jan 21 13:41:05 crc kubenswrapper[4765]: I0121 13:41:05.629708 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f66586db-9068-4225-a160-a18efc519fad" path="/var/lib/kubelet/pods/f66586db-9068-4225-a160-a18efc519fad/volumes" Jan 21 13:41:14 crc kubenswrapper[4765]: I0121 13:41:14.445777 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:41:14 crc kubenswrapper[4765]: I0121 13:41:14.446469 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:41:14 crc 
kubenswrapper[4765]: I0121 13:41:14.446529 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:41:14 crc kubenswrapper[4765]: I0121 13:41:14.447549 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"} pod="openshift-machine-config-operator/machine-config-daemon-v72nq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:41:14 crc kubenswrapper[4765]: I0121 13:41:14.447616 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" containerID="cri-o://d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0" gracePeriod=600 Jan 21 13:41:14 crc kubenswrapper[4765]: E0121 13:41:14.575659 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:41:14 crc kubenswrapper[4765]: I0121 13:41:14.612608 4765 generic.go:334] "Generic (PLEG): container finished" podID="e149390c-e4da-4dfd-bed2-b14de058f921" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0" exitCode=0 Jan 21 13:41:14 crc kubenswrapper[4765]: I0121 13:41:14.612685 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerDied","Data":"d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"} Jan 21 13:41:14 crc kubenswrapper[4765]: I0121 13:41:14.612737 4765 scope.go:117] "RemoveContainer" containerID="8be9c6b30eac9194fe69597ddad7819ab0f25189067a0149bf0d2a68338af1f4" Jan 21 13:41:14 crc kubenswrapper[4765]: I0121 13:41:14.614092 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0" Jan 21 13:41:14 crc kubenswrapper[4765]: E0121 13:41:14.615203 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:41:28 crc kubenswrapper[4765]: I0121 13:41:28.614081 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0" Jan 21 13:41:28 crc kubenswrapper[4765]: E0121 13:41:28.615859 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:41:40 crc kubenswrapper[4765]: I0121 13:41:40.613451 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0" Jan 21 13:41:40 crc kubenswrapper[4765]: E0121 13:41:40.614165 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:41:45 crc kubenswrapper[4765]: I0121 13:41:45.886943 4765 generic.go:334] "Generic (PLEG): container finished" podID="db5e6d29-c1aa-4a16-99a9-e2d559619d90" containerID="47507bf511b8a853b2973e80ce18bb2472572327980f0f2500f782f275647939" exitCode=0 Jan 21 13:41:45 crc kubenswrapper[4765]: I0121 13:41:45.887039 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" event={"ID":"db5e6d29-c1aa-4a16-99a9-e2d559619d90","Type":"ContainerDied","Data":"47507bf511b8a853b2973e80ce18bb2472572327980f0f2500f782f275647939"} Jan 21 13:41:47 crc kubenswrapper[4765]: I0121 13:41:47.288683 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" Jan 21 13:41:47 crc kubenswrapper[4765]: I0121 13:41:47.354114 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xntkg\" (UniqueName: \"kubernetes.io/projected/db5e6d29-c1aa-4a16-99a9-e2d559619d90-kube-api-access-xntkg\") pod \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " Jan 21 13:41:47 crc kubenswrapper[4765]: I0121 13:41:47.354185 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5e6d29-c1aa-4a16-99a9-e2d559619d90-ovn-combined-ca-bundle\") pod \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " Jan 21 13:41:47 crc kubenswrapper[4765]: I0121 13:41:47.354303 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db5e6d29-c1aa-4a16-99a9-e2d559619d90-inventory\") pod \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " Jan 21 13:41:47 crc kubenswrapper[4765]: I0121 13:41:47.354353 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/db5e6d29-c1aa-4a16-99a9-e2d559619d90-ovncontroller-config-0\") pod \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " Jan 21 13:41:47 crc kubenswrapper[4765]: I0121 13:41:47.354395 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db5e6d29-c1aa-4a16-99a9-e2d559619d90-ssh-key-openstack-edpm-ipam\") pod \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\" (UID: \"db5e6d29-c1aa-4a16-99a9-e2d559619d90\") " Jan 21 13:41:47 crc kubenswrapper[4765]: I0121 13:41:47.363341 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/db5e6d29-c1aa-4a16-99a9-e2d559619d90-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "db5e6d29-c1aa-4a16-99a9-e2d559619d90" (UID: "db5e6d29-c1aa-4a16-99a9-e2d559619d90"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:41:47 crc kubenswrapper[4765]: I0121 13:41:47.363357 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db5e6d29-c1aa-4a16-99a9-e2d559619d90-kube-api-access-xntkg" (OuterVolumeSpecName: "kube-api-access-xntkg") pod "db5e6d29-c1aa-4a16-99a9-e2d559619d90" (UID: "db5e6d29-c1aa-4a16-99a9-e2d559619d90"). InnerVolumeSpecName "kube-api-access-xntkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:41:47 crc kubenswrapper[4765]: I0121 13:41:47.381711 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db5e6d29-c1aa-4a16-99a9-e2d559619d90-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "db5e6d29-c1aa-4a16-99a9-e2d559619d90" (UID: "db5e6d29-c1aa-4a16-99a9-e2d559619d90"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:41:47 crc kubenswrapper[4765]: I0121 13:41:47.387603 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db5e6d29-c1aa-4a16-99a9-e2d559619d90-inventory" (OuterVolumeSpecName: "inventory") pod "db5e6d29-c1aa-4a16-99a9-e2d559619d90" (UID: "db5e6d29-c1aa-4a16-99a9-e2d559619d90"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:41:47 crc kubenswrapper[4765]: I0121 13:41:47.391284 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db5e6d29-c1aa-4a16-99a9-e2d559619d90-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "db5e6d29-c1aa-4a16-99a9-e2d559619d90" (UID: "db5e6d29-c1aa-4a16-99a9-e2d559619d90"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:41:47 crc kubenswrapper[4765]: I0121 13:41:47.456562 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db5e6d29-c1aa-4a16-99a9-e2d559619d90-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:41:47 crc kubenswrapper[4765]: I0121 13:41:47.456604 4765 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/db5e6d29-c1aa-4a16-99a9-e2d559619d90-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:41:47 crc kubenswrapper[4765]: I0121 13:41:47.456616 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db5e6d29-c1aa-4a16-99a9-e2d559619d90-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:41:47 crc kubenswrapper[4765]: I0121 13:41:47.456626 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xntkg\" (UniqueName: \"kubernetes.io/projected/db5e6d29-c1aa-4a16-99a9-e2d559619d90-kube-api-access-xntkg\") on node \"crc\" DevicePath \"\"" Jan 21 13:41:47 crc kubenswrapper[4765]: I0121 13:41:47.456635 4765 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5e6d29-c1aa-4a16-99a9-e2d559619d90-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:41:47 crc kubenswrapper[4765]: I0121 13:41:47.907002 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" event={"ID":"db5e6d29-c1aa-4a16-99a9-e2d559619d90","Type":"ContainerDied","Data":"6a2fb51af4b4de1a1068330bb6637d6eb8f622947ef47d53188295308e7b5c69"} Jan 21 13:41:47 crc kubenswrapper[4765]: I0121 13:41:47.907035 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a2fb51af4b4de1a1068330bb6637d6eb8f622947ef47d53188295308e7b5c69" Jan 21 13:41:47 crc kubenswrapper[4765]: I0121 13:41:47.907069 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-cqnjn" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.007138 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs"] Jan 21 13:41:48 crc kubenswrapper[4765]: E0121 13:41:48.007655 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66586db-9068-4225-a160-a18efc519fad" containerName="extract-utilities" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.007678 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66586db-9068-4225-a160-a18efc519fad" containerName="extract-utilities" Jan 21 13:41:48 crc kubenswrapper[4765]: E0121 13:41:48.007689 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66586db-9068-4225-a160-a18efc519fad" containerName="extract-content" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.007699 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66586db-9068-4225-a160-a18efc519fad" containerName="extract-content" Jan 21 13:41:48 crc kubenswrapper[4765]: E0121 13:41:48.007708 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a98c710-b854-4617-86b8-9c40f1ac12a4" containerName="registry-server" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.007717 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a98c710-b854-4617-86b8-9c40f1ac12a4" containerName="registry-server" Jan 21 13:41:48 crc kubenswrapper[4765]: E0121 13:41:48.007742 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66586db-9068-4225-a160-a18efc519fad" containerName="registry-server" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.007750 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66586db-9068-4225-a160-a18efc519fad" containerName="registry-server" Jan 21 13:41:48 crc kubenswrapper[4765]: E0121 13:41:48.007767 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db5e6d29-c1aa-4a16-99a9-e2d559619d90" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.007775 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="db5e6d29-c1aa-4a16-99a9-e2d559619d90" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 13:41:48 crc kubenswrapper[4765]: E0121 13:41:48.007785 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a98c710-b854-4617-86b8-9c40f1ac12a4" containerName="extract-content" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.007794 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a98c710-b854-4617-86b8-9c40f1ac12a4" containerName="extract-content" Jan 21 13:41:48 crc kubenswrapper[4765]: E0121 13:41:48.007826 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a98c710-b854-4617-86b8-9c40f1ac12a4" containerName="extract-utilities" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.007834 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a98c710-b854-4617-86b8-9c40f1ac12a4" containerName="extract-utilities" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.008077 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f66586db-9068-4225-a160-a18efc519fad" containerName="registry-server" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.008102 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="db5e6d29-c1aa-4a16-99a9-e2d559619d90" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 
13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.008122 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a98c710-b854-4617-86b8-9c40f1ac12a4" containerName="registry-server" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.008998 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.014947 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.015229 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.015501 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.015642 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x88cm" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.015743 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.015802 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.025838 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs"] Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.067599 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.067677 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.067710 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.067817 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g9f9\" (UniqueName: \"kubernetes.io/projected/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-kube-api-access-2g9f9\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.067940 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.068040 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.169448 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.169521 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.169550 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.169576 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g9f9\" (UniqueName: \"kubernetes.io/projected/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-kube-api-access-2g9f9\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.169615 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.169654 4765 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.174992 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.175077 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.175171 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.175187 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.184851 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.202103 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g9f9\" (UniqueName: \"kubernetes.io/projected/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-kube-api-access-2g9f9\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.328093 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.716649 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs"] Jan 21 13:41:48 crc kubenswrapper[4765]: I0121 13:41:48.919295 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" event={"ID":"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8","Type":"ContainerStarted","Data":"7dd85fedbd215cea84593c267bfb3a8b89a6bdae2e5d703a5b621026cc236422"} Jan 21 13:41:49 crc kubenswrapper[4765]: I0121 13:41:49.967690 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" event={"ID":"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8","Type":"ContainerStarted","Data":"923a7e9df43f76e29e3f390b4411b4eca9e94db6f66736b21eb3ba1166baacd1"} Jan 21 13:41:55 crc kubenswrapper[4765]: I0121 13:41:55.614072 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0" Jan 21 13:41:55 crc kubenswrapper[4765]: E0121 13:41:55.615869 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:42:08 crc kubenswrapper[4765]: I0121 13:42:08.613494 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0" Jan 21 13:42:08 crc kubenswrapper[4765]: E0121 13:42:08.614389 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:42:19 crc kubenswrapper[4765]: I0121 13:42:19.625584 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0" Jan 21 13:42:19 crc kubenswrapper[4765]: E0121 13:42:19.626968 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:42:21 crc kubenswrapper[4765]: I0121 13:42:21.235554 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-9bbb7" podUID="7c5b52bd-6cb5-4544-9c7d-b374210ae44d" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 13:42:33 crc kubenswrapper[4765]: I0121 13:42:33.614557 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0" Jan 21 13:42:33 crc kubenswrapper[4765]: E0121 
13:42:33.615523 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:42:43 crc kubenswrapper[4765]: I0121 13:42:43.462761 4765 generic.go:334] "Generic (PLEG): container finished" podID="8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8" containerID="923a7e9df43f76e29e3f390b4411b4eca9e94db6f66736b21eb3ba1166baacd1" exitCode=0 Jan 21 13:42:43 crc kubenswrapper[4765]: I0121 13:42:43.462814 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" event={"ID":"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8","Type":"ContainerDied","Data":"923a7e9df43f76e29e3f390b4411b4eca9e94db6f66736b21eb3ba1166baacd1"} Jan 21 13:42:44 crc kubenswrapper[4765]: I0121 13:42:44.928494 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.061751 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-ssh-key-openstack-edpm-ipam\") pod \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.061916 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-neutron-metadata-combined-ca-bundle\") pod \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.061995 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g9f9\" (UniqueName: \"kubernetes.io/projected/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-kube-api-access-2g9f9\") pod \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.062139 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-nova-metadata-neutron-config-0\") pod \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.062260 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-inventory\") pod \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.062361 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\" (UID: \"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8\") " Jan 21 13:42:45 crc 
kubenswrapper[4765]: I0121 13:42:45.068426 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-kube-api-access-2g9f9" (OuterVolumeSpecName: "kube-api-access-2g9f9") pod "8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8" (UID: "8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8"). InnerVolumeSpecName "kube-api-access-2g9f9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.078552 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8" (UID: "8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.091897 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8" (UID: "8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.097162 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8" (UID: "8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.098039 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-inventory" (OuterVolumeSpecName: "inventory") pod "8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8" (UID: "8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.099504 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8" (UID: "8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.164779 4765 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.164814 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.164831 4765 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.164844 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g9f9\" (UniqueName: \"kubernetes.io/projected/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-kube-api-access-2g9f9\") on node \"crc\" DevicePath \"\""
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.164857 4765 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.164879 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.485413 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs" event={"ID":"8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8","Type":"ContainerDied","Data":"7dd85fedbd215cea84593c267bfb3a8b89a6bdae2e5d703a5b621026cc236422"}
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.485461 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dd85fedbd215cea84593c267bfb3a8b89a6bdae2e5d703a5b621026cc236422"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.485523 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.637569 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"]
Jan 21 13:42:45 crc kubenswrapper[4765]: E0121 13:42:45.638132 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.638159 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.638418 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.639251 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.642468 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x88cm"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.642714 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.642744 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.642798 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.643965 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.655131 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"]
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.777531 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.777639 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.777708 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.777734 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.777756 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv4xt\" (UniqueName: \"kubernetes.io/projected/26624762-8a2d-4273-9f09-73895227b65c-kube-api-access-jv4xt\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.880258 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.880559 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.880646 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.880680 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.880707 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv4xt\" (UniqueName: \"kubernetes.io/projected/26624762-8a2d-4273-9f09-73895227b65c-kube-api-access-jv4xt\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.885188 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.885826 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.886260 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.890172 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.907363 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv4xt\" (UniqueName: \"kubernetes.io/projected/26624762-8a2d-4273-9f09-73895227b65c-kube-api-access-jv4xt\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"
Jan 21 13:42:45 crc kubenswrapper[4765]: I0121 13:42:45.956544 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"
Jan 21 13:42:46 crc kubenswrapper[4765]: I0121 13:42:46.516922 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"]
Jan 21 13:42:47 crc kubenswrapper[4765]: I0121 13:42:47.503502 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl" event={"ID":"26624762-8a2d-4273-9f09-73895227b65c","Type":"ContainerStarted","Data":"66697b404362306ef521b0e665cb9ff92fe75326542a0e58ef9cde4b140ccf0f"}
Jan 21 13:42:47 crc kubenswrapper[4765]: I0121 13:42:47.613804 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"
Jan 21 13:42:47 crc kubenswrapper[4765]: E0121 13:42:47.614061 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:42:50 crc kubenswrapper[4765]: I0121 13:42:50.531475 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl" event={"ID":"26624762-8a2d-4273-9f09-73895227b65c","Type":"ContainerStarted","Data":"e635f59ea298c26715b8f9c7c3bdcea1186ee9e2b52cdb6e512fff293da9c326"}
Jan 21 13:42:51 crc kubenswrapper[4765]: I0121 13:42:51.562007 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl" podStartSLOduration=3.572624163 podStartE2EDuration="6.561988058s" podCreationTimestamp="2026-01-21 13:42:45 +0000 UTC" firstStartedPulling="2026-01-21 13:42:46.514068859 +0000 UTC m=+2427.531794671" lastFinishedPulling="2026-01-21 13:42:49.503432744 +0000 UTC m=+2430.521158566" observedRunningTime="2026-01-21 13:42:51.557608851 +0000 UTC m=+2432.575334673" watchObservedRunningTime="2026-01-21 13:42:51.561988058 +0000 UTC m=+2432.579713880"
Jan 21 13:42:59 crc kubenswrapper[4765]: I0121 13:42:59.619385 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"
Jan 21 13:42:59 crc kubenswrapper[4765]: E0121 13:42:59.620102 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:43:14 crc kubenswrapper[4765]: I0121 13:43:14.615174 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"
Jan 21 13:43:14 crc kubenswrapper[4765]: E0121 13:43:14.616162 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:43:25 crc kubenswrapper[4765]: I0121 13:43:25.616324 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"
Jan 21 13:43:25 crc kubenswrapper[4765]: E0121 13:43:25.617951 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:43:37 crc kubenswrapper[4765]: I0121 13:43:37.613503 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"
Jan 21 13:43:37 crc kubenswrapper[4765]: E0121 13:43:37.614390 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:43:49 crc kubenswrapper[4765]: I0121 13:43:49.621845 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"
Jan 21 13:43:49 crc kubenswrapper[4765]: E0121 13:43:49.622715 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:44:01 crc kubenswrapper[4765]: I0121 13:44:01.614079 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"
Jan 21 13:44:01 crc kubenswrapper[4765]: E0121 13:44:01.615005 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:44:15 crc kubenswrapper[4765]: I0121 13:44:15.614343 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"
Jan 21 13:44:15 crc kubenswrapper[4765]: E0121 13:44:15.615142 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:44:29 crc kubenswrapper[4765]: I0121 13:44:29.614053 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"
Jan 21 13:44:29 crc kubenswrapper[4765]: E0121 13:44:29.614999 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:44:43 crc kubenswrapper[4765]: I0121 13:44:43.615526 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"
Jan 21 13:44:43 crc kubenswrapper[4765]: E0121 13:44:43.616587 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:44:58 crc kubenswrapper[4765]: I0121 13:44:58.614119 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"
Jan 21 13:44:58 crc kubenswrapper[4765]: E0121 13:44:58.614902 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:45:00 crc kubenswrapper[4765]: I0121 13:45:00.149365 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483385-wh6c2"]
Jan 21 13:45:00 crc kubenswrapper[4765]: I0121 13:45:00.152487 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-wh6c2"
Jan 21 13:45:00 crc kubenswrapper[4765]: I0121 13:45:00.155552 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 13:45:00 crc kubenswrapper[4765]: I0121 13:45:00.162198 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483385-wh6c2"]
Jan 21 13:45:00 crc kubenswrapper[4765]: I0121 13:45:00.164765 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 13:45:00 crc kubenswrapper[4765]: I0121 13:45:00.227019 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f522848-d230-4e8f-ab07-184fbb92483a-secret-volume\") pod \"collect-profiles-29483385-wh6c2\" (UID: \"4f522848-d230-4e8f-ab07-184fbb92483a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-wh6c2"
Jan 21 13:45:00 crc kubenswrapper[4765]: I0121 13:45:00.227150 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f522848-d230-4e8f-ab07-184fbb92483a-config-volume\") pod \"collect-profiles-29483385-wh6c2\" (UID: \"4f522848-d230-4e8f-ab07-184fbb92483a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-wh6c2"
Jan 21 13:45:00 crc kubenswrapper[4765]: I0121 13:45:00.227191 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnrgd\" (UniqueName: \"kubernetes.io/projected/4f522848-d230-4e8f-ab07-184fbb92483a-kube-api-access-hnrgd\") pod \"collect-profiles-29483385-wh6c2\" (UID: \"4f522848-d230-4e8f-ab07-184fbb92483a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-wh6c2"
Jan 21 13:45:00 crc kubenswrapper[4765]: I0121 13:45:00.329039 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f522848-d230-4e8f-ab07-184fbb92483a-secret-volume\") pod \"collect-profiles-29483385-wh6c2\" (UID: \"4f522848-d230-4e8f-ab07-184fbb92483a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-wh6c2"
Jan 21 13:45:00 crc kubenswrapper[4765]: I0121 13:45:00.329153 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f522848-d230-4e8f-ab07-184fbb92483a-config-volume\") pod \"collect-profiles-29483385-wh6c2\" (UID: \"4f522848-d230-4e8f-ab07-184fbb92483a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-wh6c2"
Jan 21 13:45:00 crc kubenswrapper[4765]: I0121 13:45:00.329189 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnrgd\" (UniqueName: \"kubernetes.io/projected/4f522848-d230-4e8f-ab07-184fbb92483a-kube-api-access-hnrgd\") pod \"collect-profiles-29483385-wh6c2\" (UID: \"4f522848-d230-4e8f-ab07-184fbb92483a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-wh6c2"
Jan 21 13:45:00 crc kubenswrapper[4765]: I0121 13:45:00.330624 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f522848-d230-4e8f-ab07-184fbb92483a-config-volume\") pod \"collect-profiles-29483385-wh6c2\" (UID: \"4f522848-d230-4e8f-ab07-184fbb92483a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-wh6c2"
Jan 21 13:45:00 crc kubenswrapper[4765]: I0121 13:45:00.335481 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f522848-d230-4e8f-ab07-184fbb92483a-secret-volume\") pod \"collect-profiles-29483385-wh6c2\" (UID: \"4f522848-d230-4e8f-ab07-184fbb92483a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-wh6c2"
Jan 21 13:45:00 crc kubenswrapper[4765]: I0121 13:45:00.347622 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnrgd\" (UniqueName: \"kubernetes.io/projected/4f522848-d230-4e8f-ab07-184fbb92483a-kube-api-access-hnrgd\") pod \"collect-profiles-29483385-wh6c2\" (UID: \"4f522848-d230-4e8f-ab07-184fbb92483a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-wh6c2"
Jan 21 13:45:00 crc kubenswrapper[4765]: I0121 13:45:00.482845 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-wh6c2"
Jan 21 13:45:01 crc kubenswrapper[4765]: I0121 13:45:01.004273 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483385-wh6c2"]
Jan 21 13:45:01 crc kubenswrapper[4765]: I0121 13:45:01.795127 4765 generic.go:334] "Generic (PLEG): container finished" podID="4f522848-d230-4e8f-ab07-184fbb92483a" containerID="ba0c6a33437089c4b0830dab9774ec3214d11cc21f8b90e967892ff39cadf2ec" exitCode=0
Jan 21 13:45:01 crc kubenswrapper[4765]: I0121 13:45:01.795197 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-wh6c2" event={"ID":"4f522848-d230-4e8f-ab07-184fbb92483a","Type":"ContainerDied","Data":"ba0c6a33437089c4b0830dab9774ec3214d11cc21f8b90e967892ff39cadf2ec"}
Jan 21 13:45:01 crc kubenswrapper[4765]: I0121 13:45:01.795536 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-wh6c2" event={"ID":"4f522848-d230-4e8f-ab07-184fbb92483a","Type":"ContainerStarted","Data":"2012e4a331872fd57c287422af61326a44d8b1832aae7c80428624b1dbd09d74"}
Jan 21 13:45:03 crc kubenswrapper[4765]: I0121 13:45:03.117979 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-wh6c2"
Jan 21 13:45:03 crc kubenswrapper[4765]: I0121 13:45:03.178761 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f522848-d230-4e8f-ab07-184fbb92483a-secret-volume\") pod \"4f522848-d230-4e8f-ab07-184fbb92483a\" (UID: \"4f522848-d230-4e8f-ab07-184fbb92483a\") "
Jan 21 13:45:03 crc kubenswrapper[4765]: I0121 13:45:03.178825 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnrgd\" (UniqueName: \"kubernetes.io/projected/4f522848-d230-4e8f-ab07-184fbb92483a-kube-api-access-hnrgd\") pod \"4f522848-d230-4e8f-ab07-184fbb92483a\" (UID: \"4f522848-d230-4e8f-ab07-184fbb92483a\") "
Jan 21 13:45:03 crc kubenswrapper[4765]: I0121 13:45:03.178984 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f522848-d230-4e8f-ab07-184fbb92483a-config-volume\") pod \"4f522848-d230-4e8f-ab07-184fbb92483a\" (UID: \"4f522848-d230-4e8f-ab07-184fbb92483a\") "
Jan 21 13:45:03 crc kubenswrapper[4765]: I0121 13:45:03.180164 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f522848-d230-4e8f-ab07-184fbb92483a-config-volume" (OuterVolumeSpecName: "config-volume") pod "4f522848-d230-4e8f-ab07-184fbb92483a" (UID: "4f522848-d230-4e8f-ab07-184fbb92483a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 13:45:03 crc kubenswrapper[4765]: I0121 13:45:03.187751 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f522848-d230-4e8f-ab07-184fbb92483a-kube-api-access-hnrgd" (OuterVolumeSpecName: "kube-api-access-hnrgd") pod "4f522848-d230-4e8f-ab07-184fbb92483a" (UID: "4f522848-d230-4e8f-ab07-184fbb92483a"). InnerVolumeSpecName "kube-api-access-hnrgd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:45:03 crc kubenswrapper[4765]: I0121 13:45:03.192736 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f522848-d230-4e8f-ab07-184fbb92483a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4f522848-d230-4e8f-ab07-184fbb92483a" (UID: "4f522848-d230-4e8f-ab07-184fbb92483a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:45:03 crc kubenswrapper[4765]: I0121 13:45:03.281137 4765 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f522848-d230-4e8f-ab07-184fbb92483a-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 13:45:03 crc kubenswrapper[4765]: I0121 13:45:03.281218 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnrgd\" (UniqueName: \"kubernetes.io/projected/4f522848-d230-4e8f-ab07-184fbb92483a-kube-api-access-hnrgd\") on node \"crc\" DevicePath \"\""
Jan 21 13:45:03 crc kubenswrapper[4765]: I0121 13:45:03.281239 4765 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f522848-d230-4e8f-ab07-184fbb92483a-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 13:45:03 crc kubenswrapper[4765]: I0121 13:45:03.818470 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-wh6c2" event={"ID":"4f522848-d230-4e8f-ab07-184fbb92483a","Type":"ContainerDied","Data":"2012e4a331872fd57c287422af61326a44d8b1832aae7c80428624b1dbd09d74"}
Jan 21 13:45:03 crc kubenswrapper[4765]: I0121 13:45:03.818817 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2012e4a331872fd57c287422af61326a44d8b1832aae7c80428624b1dbd09d74"
Jan 21 13:45:03 crc kubenswrapper[4765]: I0121 13:45:03.818534 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-wh6c2"
Jan 21 13:45:04 crc kubenswrapper[4765]: I0121 13:45:04.204381 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd"]
Jan 21 13:45:04 crc kubenswrapper[4765]: I0121 13:45:04.213866 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-pnjtd"]
Jan 21 13:45:05 crc kubenswrapper[4765]: I0121 13:45:05.637115 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b" path="/var/lib/kubelet/pods/561fd39d-3fe4-4cb7-9ed7-1e92f7d8e13b/volumes"
Jan 21 13:45:09 crc kubenswrapper[4765]: I0121 13:45:09.622317 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"
Jan 21 13:45:09 crc kubenswrapper[4765]: E0121 13:45:09.624142 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:45:22 crc kubenswrapper[4765]: I0121 13:45:22.614147 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"
Jan 21 13:45:22 crc kubenswrapper[4765]: E0121 13:45:22.615042 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:45:36 crc kubenswrapper[4765]: I0121 13:45:36.613481 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"
Jan 21 13:45:36 crc kubenswrapper[4765]: E0121 13:45:36.614167 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:45:38 crc kubenswrapper[4765]: I0121 13:45:38.144483 4765 scope.go:117] "RemoveContainer" containerID="1e1de584e78b0855b3a075eca7aab4239fac9a586c44eb22c777801b59307bc5"
Jan 21 13:45:49 crc kubenswrapper[4765]: I0121 13:45:49.621870 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"
Jan 21 13:45:49 crc kubenswrapper[4765]: E0121 13:45:49.623818 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:46:04 crc kubenswrapper[4765]: I0121 13:46:04.614069 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"
Jan 21 13:46:04 crc kubenswrapper[4765]: E0121 13:46:04.614840 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 13:46:17 crc kubenswrapper[4765]: I0121 13:46:17.614917 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0"
Jan 21 13:46:18 crc kubenswrapper[4765]: I0121 13:46:18.587619 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"289eeaf139eef3057016b49afbca88f96cc90417dca0a155ef85620d4bfd08bb"}
Jan 21 13:47:22 crc kubenswrapper[4765]: I0121 13:47:22.230815 4765 generic.go:334] "Generic (PLEG): container finished" podID="26624762-8a2d-4273-9f09-73895227b65c" containerID="e635f59ea298c26715b8f9c7c3bdcea1186ee9e2b52cdb6e512fff293da9c326" exitCode=0
Jan 21 13:47:22 crc kubenswrapper[4765]: I0121 13:47:22.230909 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl" event={"ID":"26624762-8a2d-4273-9f09-73895227b65c","Type":"ContainerDied","Data":"e635f59ea298c26715b8f9c7c3bdcea1186ee9e2b52cdb6e512fff293da9c326"}
Jan 21 13:47:23 crc kubenswrapper[4765]: I0121 13:47:23.805866 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"
Jan 21 13:47:23 crc kubenswrapper[4765]: I0121 13:47:23.814688 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-libvirt-secret-0\") pod \"26624762-8a2d-4273-9f09-73895227b65c\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") "
Jan 21 13:47:23 crc kubenswrapper[4765]: I0121 13:47:23.814740 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-ssh-key-openstack-edpm-ipam\") pod \"26624762-8a2d-4273-9f09-73895227b65c\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") "
Jan 21 13:47:23 crc kubenswrapper[4765]: I0121 13:47:23.814800 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv4xt\" (UniqueName: \"kubernetes.io/projected/26624762-8a2d-4273-9f09-73895227b65c-kube-api-access-jv4xt\") pod \"26624762-8a2d-4273-9f09-73895227b65c\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") "
Jan 21 13:47:23 crc kubenswrapper[4765]: I0121 13:47:23.814845 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-libvirt-combined-ca-bundle\") pod \"26624762-8a2d-4273-9f09-73895227b65c\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") "
Jan 21 13:47:23 crc kubenswrapper[4765]: I0121 13:47:23.814889 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-inventory\") pod \"26624762-8a2d-4273-9f09-73895227b65c\" (UID: \"26624762-8a2d-4273-9f09-73895227b65c\") "
Jan 21 13:47:23 crc kubenswrapper[4765]: I0121 13:47:23.820049 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "26624762-8a2d-4273-9f09-73895227b65c" (UID: "26624762-8a2d-4273-9f09-73895227b65c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:47:23 crc kubenswrapper[4765]: I0121 13:47:23.832002 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26624762-8a2d-4273-9f09-73895227b65c-kube-api-access-jv4xt" (OuterVolumeSpecName: "kube-api-access-jv4xt") pod "26624762-8a2d-4273-9f09-73895227b65c" (UID: "26624762-8a2d-4273-9f09-73895227b65c"). InnerVolumeSpecName "kube-api-access-jv4xt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:47:23 crc kubenswrapper[4765]: I0121 13:47:23.849493 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "26624762-8a2d-4273-9f09-73895227b65c" (UID: "26624762-8a2d-4273-9f09-73895227b65c"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:47:23 crc kubenswrapper[4765]: I0121 13:47:23.856662 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-inventory" (OuterVolumeSpecName: "inventory") pod "26624762-8a2d-4273-9f09-73895227b65c" (UID: "26624762-8a2d-4273-9f09-73895227b65c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:47:23 crc kubenswrapper[4765]: I0121 13:47:23.864813 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "26624762-8a2d-4273-9f09-73895227b65c" (UID: "26624762-8a2d-4273-9f09-73895227b65c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 13:47:23 crc kubenswrapper[4765]: I0121 13:47:23.916710 4765 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 21 13:47:23 crc kubenswrapper[4765]: I0121 13:47:23.916737 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 13:47:23 crc kubenswrapper[4765]: I0121 13:47:23.916749 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv4xt\" (UniqueName: \"kubernetes.io/projected/26624762-8a2d-4273-9f09-73895227b65c-kube-api-access-jv4xt\") on node \"crc\" DevicePath \"\""
Jan 21 13:47:23 crc kubenswrapper[4765]: I0121 13:47:23.916757 4765 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 13:47:23 crc kubenswrapper[4765]: I0121 13:47:23.916766 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26624762-8a2d-4273-9f09-73895227b65c-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.248146 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl" event={"ID":"26624762-8a2d-4273-9f09-73895227b65c","Type":"ContainerDied","Data":"66697b404362306ef521b0e665cb9ff92fe75326542a0e58ef9cde4b140ccf0f"}
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.248191 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66697b404362306ef521b0e665cb9ff92fe75326542a0e58ef9cde4b140ccf0f"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.248263 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.394761 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"]
Jan 21 13:47:24 crc kubenswrapper[4765]: E0121 13:47:24.395135 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f522848-d230-4e8f-ab07-184fbb92483a" containerName="collect-profiles"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.395151 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f522848-d230-4e8f-ab07-184fbb92483a" containerName="collect-profiles"
Jan 21 13:47:24 crc kubenswrapper[4765]: E0121 13:47:24.395190 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26624762-8a2d-4273-9f09-73895227b65c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.395198 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="26624762-8a2d-4273-9f09-73895227b65c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.395406 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f522848-d230-4e8f-ab07-184fbb92483a" containerName="collect-profiles"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.395423 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="26624762-8a2d-4273-9f09-73895227b65c" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.395989 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.399173 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.400574 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x88cm"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.400806 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.401019 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.401255 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.401353 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.402632 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.416343 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"]
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.431553 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.431645 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.431674 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.431700 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pmfv\" (UniqueName: \"kubernetes.io/projected/13a3818b-4be7-40d0-99d2-ae84ab4caceb-kube-api-access-9pmfv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.431740 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.431772 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.431808 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.431907 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.431943 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.533577 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.534439 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.534557 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.534614 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.534722 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.534884 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.534932 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.534975 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pmfv\" (UniqueName: \"kubernetes.io/projected/13a3818b-4be7-40d0-99d2-ae84ab4caceb-kube-api-access-9pmfv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.535063 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.536064 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.537426 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.537735 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.538825 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.538949 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.539161 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.539802 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.549123 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.575457 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pmfv\" (UniqueName: \"kubernetes.io/projected/13a3818b-4be7-40d0-99d2-ae84ab4caceb-kube-api-access-9pmfv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-pntmx\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:24 crc kubenswrapper[4765]: I0121 13:47:24.713686 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"
Jan 21 13:47:25 crc kubenswrapper[4765]: I0121 13:47:25.242112 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx"]
Jan 21 13:47:25 crc kubenswrapper[4765]: I0121 13:47:25.244078 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 13:47:25 crc kubenswrapper[4765]: I0121 13:47:25.258319 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx" event={"ID":"13a3818b-4be7-40d0-99d2-ae84ab4caceb","Type":"ContainerStarted","Data":"d49cc3bb65d213756f6f320a15ea5983a08a18ce5e9c3051f1cdf21abe2a41de"}
Jan 21 13:47:26 crc kubenswrapper[4765]: I0121 13:47:26.266862 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx" event={"ID":"13a3818b-4be7-40d0-99d2-ae84ab4caceb","Type":"ContainerStarted","Data":"bde977e688c0036311f9ff673f772653b88d47e0808d365cac24624b4152b880"}
Jan 21 13:47:26 crc kubenswrapper[4765]: I0121 13:47:26.294356 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx" podStartSLOduration=1.782436275 podStartE2EDuration="2.294341021s" podCreationTimestamp="2026-01-21 13:47:24 +0000 UTC" firstStartedPulling="2026-01-21 13:47:25.243898567 +0000 UTC m=+2706.261624389" lastFinishedPulling="2026-01-21 13:47:25.755803293 +0000 UTC m=+2706.773529135" observedRunningTime="2026-01-21 13:47:26.287964393 +0000 UTC m=+2707.305690215" watchObservedRunningTime="2026-01-21 13:47:26.294341021 +0000 UTC m=+2707.312066843"
Jan 21 13:48:31 crc kubenswrapper[4765]: I0121 13:48:31.927622 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wkt6f"]
Jan 21 13:48:31 crc kubenswrapper[4765]: I0121 13:48:31.931121 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wkt6f"
Jan 21 13:48:31 crc kubenswrapper[4765]: I0121 13:48:31.938756 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wkt6f"]
Jan 21 13:48:32 crc kubenswrapper[4765]: I0121 13:48:32.085733 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6954a0fd-b941-40ad-9f9c-258b83189b1a-catalog-content\") pod \"redhat-operators-wkt6f\" (UID: \"6954a0fd-b941-40ad-9f9c-258b83189b1a\") " pod="openshift-marketplace/redhat-operators-wkt6f"
Jan 21 13:48:32 crc kubenswrapper[4765]: I0121 13:48:32.086046 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89f9z\" (UniqueName: \"kubernetes.io/projected/6954a0fd-b941-40ad-9f9c-258b83189b1a-kube-api-access-89f9z\") pod \"redhat-operators-wkt6f\" (UID: \"6954a0fd-b941-40ad-9f9c-258b83189b1a\") " pod="openshift-marketplace/redhat-operators-wkt6f"
Jan 21 13:48:32 crc kubenswrapper[4765]: I0121 13:48:32.086352 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6954a0fd-b941-40ad-9f9c-258b83189b1a-utilities\") pod \"redhat-operators-wkt6f\" (UID: \"6954a0fd-b941-40ad-9f9c-258b83189b1a\") " pod="openshift-marketplace/redhat-operators-wkt6f"
Jan 21 13:48:32 crc kubenswrapper[4765]: I0121 13:48:32.189102 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6954a0fd-b941-40ad-9f9c-258b83189b1a-catalog-content\") pod \"redhat-operators-wkt6f\" (UID: \"6954a0fd-b941-40ad-9f9c-258b83189b1a\") " pod="openshift-marketplace/redhat-operators-wkt6f"
Jan 21 13:48:32 crc kubenswrapper[4765]: I0121 13:48:32.189492 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89f9z\" (UniqueName: \"kubernetes.io/projected/6954a0fd-b941-40ad-9f9c-258b83189b1a-kube-api-access-89f9z\") pod \"redhat-operators-wkt6f\" (UID: \"6954a0fd-b941-40ad-9f9c-258b83189b1a\") " pod="openshift-marketplace/redhat-operators-wkt6f"
Jan 21 13:48:32 crc kubenswrapper[4765]: I0121 13:48:32.189689 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6954a0fd-b941-40ad-9f9c-258b83189b1a-utilities\") pod \"redhat-operators-wkt6f\" (UID: \"6954a0fd-b941-40ad-9f9c-258b83189b1a\") " pod="openshift-marketplace/redhat-operators-wkt6f"
Jan 21 13:48:32 crc kubenswrapper[4765]: I0121 13:48:32.190070 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6954a0fd-b941-40ad-9f9c-258b83189b1a-catalog-content\") pod \"redhat-operators-wkt6f\" (UID: \"6954a0fd-b941-40ad-9f9c-258b83189b1a\") " pod="openshift-marketplace/redhat-operators-wkt6f"
Jan 21 13:48:32 crc kubenswrapper[4765]: I0121 13:48:32.190142 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6954a0fd-b941-40ad-9f9c-258b83189b1a-utilities\") pod \"redhat-operators-wkt6f\" (UID: \"6954a0fd-b941-40ad-9f9c-258b83189b1a\") " pod="openshift-marketplace/redhat-operators-wkt6f"
Jan 21 13:48:32 crc kubenswrapper[4765]: I0121 13:48:32.209839 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89f9z\" (UniqueName: \"kubernetes.io/projected/6954a0fd-b941-40ad-9f9c-258b83189b1a-kube-api-access-89f9z\") pod \"redhat-operators-wkt6f\" (UID: \"6954a0fd-b941-40ad-9f9c-258b83189b1a\") " pod="openshift-marketplace/redhat-operators-wkt6f"
Jan 21 13:48:32 crc kubenswrapper[4765]: I0121 13:48:32.283446 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wkt6f"
Jan 21 13:48:32 crc kubenswrapper[4765]: I0121 13:48:32.816415 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wkt6f"]
Jan 21 13:48:32 crc kubenswrapper[4765]: I0121 13:48:32.897786 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkt6f" event={"ID":"6954a0fd-b941-40ad-9f9c-258b83189b1a","Type":"ContainerStarted","Data":"bdc088be6b6f53db9d0d00d732dfe5451a7f1dac528bad1a6b73761451a1ff82"}
Jan 21 13:48:33 crc kubenswrapper[4765]: I0121 13:48:33.914791 4765 generic.go:334] "Generic (PLEG): container finished" podID="6954a0fd-b941-40ad-9f9c-258b83189b1a" containerID="13b535f3881716387b3409d028ea00ea3b778d40220f8506b77a8943c693adda" exitCode=0
Jan 21 13:48:33 crc kubenswrapper[4765]: I0121 13:48:33.914912 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkt6f" event={"ID":"6954a0fd-b941-40ad-9f9c-258b83189b1a","Type":"ContainerDied","Data":"13b535f3881716387b3409d028ea00ea3b778d40220f8506b77a8943c693adda"}
Jan 21 13:48:35 crc kubenswrapper[4765]: I0121 13:48:35.941778 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkt6f" event={"ID":"6954a0fd-b941-40ad-9f9c-258b83189b1a","Type":"ContainerStarted","Data":"28233df2da0b85373a860d68dcb5de85078de5f7aa2f89a3982653b7aad57490"}
Jan 21 13:48:40 crc kubenswrapper[4765]: I0121 13:48:40.987174 4765 generic.go:334] "Generic (PLEG): container finished" podID="6954a0fd-b941-40ad-9f9c-258b83189b1a" containerID="28233df2da0b85373a860d68dcb5de85078de5f7aa2f89a3982653b7aad57490" exitCode=0
Jan 21 13:48:40 crc kubenswrapper[4765]: I0121 13:48:40.987300 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkt6f" event={"ID":"6954a0fd-b941-40ad-9f9c-258b83189b1a","Type":"ContainerDied","Data":"28233df2da0b85373a860d68dcb5de85078de5f7aa2f89a3982653b7aad57490"}
Jan 21 13:48:42 crc kubenswrapper[4765]: I0121 13:48:42.009942 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkt6f" event={"ID":"6954a0fd-b941-40ad-9f9c-258b83189b1a","Type":"ContainerStarted","Data":"9f6a1eb7e81a435e33a4566eba9566790c7ad65266aea1a13099cbb0ea1db344"}
Jan 21 13:48:42 crc kubenswrapper[4765]: I0121 13:48:42.034814 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wkt6f" podStartSLOduration=3.335376343 podStartE2EDuration="11.034795224s" podCreationTimestamp="2026-01-21 13:48:31 +0000 UTC" firstStartedPulling="2026-01-21 13:48:33.917757161 +0000 UTC m=+2774.935482983" lastFinishedPulling="2026-01-21 13:48:41.617176032 +0000 UTC m=+2782.634901864" observedRunningTime="2026-01-21 13:48:42.028030336 +0000 UTC m=+2783.045756168" watchObservedRunningTime="2026-01-21 13:48:42.034795224 +0000 UTC m=+2783.052521036"
Jan 21 13:48:42 crc kubenswrapper[4765]: I0121 13:48:42.284562 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wkt6f"
Jan 21
13:48:42 crc kubenswrapper[4765]: I0121 13:48:42.284817 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wkt6f" Jan 21 13:48:43 crc kubenswrapper[4765]: I0121 13:48:43.332603 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wkt6f" podUID="6954a0fd-b941-40ad-9f9c-258b83189b1a" containerName="registry-server" probeResult="failure" output=< Jan 21 13:48:43 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s Jan 21 13:48:43 crc kubenswrapper[4765]: > Jan 21 13:48:44 crc kubenswrapper[4765]: I0121 13:48:44.446447 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:48:44 crc kubenswrapper[4765]: I0121 13:48:44.447443 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:48:52 crc kubenswrapper[4765]: I0121 13:48:52.358074 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wkt6f" Jan 21 13:48:52 crc kubenswrapper[4765]: I0121 13:48:52.437734 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wkt6f" Jan 21 13:48:52 crc kubenswrapper[4765]: I0121 13:48:52.613655 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wkt6f"] Jan 21 13:48:54 crc kubenswrapper[4765]: I0121 13:48:54.117698 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wkt6f" podUID="6954a0fd-b941-40ad-9f9c-258b83189b1a" containerName="registry-server" containerID="cri-o://9f6a1eb7e81a435e33a4566eba9566790c7ad65266aea1a13099cbb0ea1db344" gracePeriod=2 Jan 21 13:48:54 crc kubenswrapper[4765]: I0121 13:48:54.664161 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wkt6f" Jan 21 13:48:54 crc kubenswrapper[4765]: I0121 13:48:54.818886 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6954a0fd-b941-40ad-9f9c-258b83189b1a-catalog-content\") pod \"6954a0fd-b941-40ad-9f9c-258b83189b1a\" (UID: \"6954a0fd-b941-40ad-9f9c-258b83189b1a\") " Jan 21 13:48:54 crc kubenswrapper[4765]: I0121 13:48:54.819023 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89f9z\" (UniqueName: \"kubernetes.io/projected/6954a0fd-b941-40ad-9f9c-258b83189b1a-kube-api-access-89f9z\") pod \"6954a0fd-b941-40ad-9f9c-258b83189b1a\" (UID: \"6954a0fd-b941-40ad-9f9c-258b83189b1a\") " Jan 21 13:48:54 crc kubenswrapper[4765]: I0121 13:48:54.819124 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6954a0fd-b941-40ad-9f9c-258b83189b1a-utilities\") pod \"6954a0fd-b941-40ad-9f9c-258b83189b1a\" (UID: \"6954a0fd-b941-40ad-9f9c-258b83189b1a\") " Jan 21 13:48:54 crc kubenswrapper[4765]: I0121 13:48:54.819986 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6954a0fd-b941-40ad-9f9c-258b83189b1a-utilities" (OuterVolumeSpecName: "utilities") pod "6954a0fd-b941-40ad-9f9c-258b83189b1a" (UID: "6954a0fd-b941-40ad-9f9c-258b83189b1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:48:54 crc kubenswrapper[4765]: I0121 13:48:54.829861 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6954a0fd-b941-40ad-9f9c-258b83189b1a-kube-api-access-89f9z" (OuterVolumeSpecName: "kube-api-access-89f9z") pod "6954a0fd-b941-40ad-9f9c-258b83189b1a" (UID: "6954a0fd-b941-40ad-9f9c-258b83189b1a"). InnerVolumeSpecName "kube-api-access-89f9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:48:54 crc kubenswrapper[4765]: I0121 13:48:54.921867 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89f9z\" (UniqueName: \"kubernetes.io/projected/6954a0fd-b941-40ad-9f9c-258b83189b1a-kube-api-access-89f9z\") on node \"crc\" DevicePath \"\"" Jan 21 13:48:54 crc kubenswrapper[4765]: I0121 13:48:54.921904 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6954a0fd-b941-40ad-9f9c-258b83189b1a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:48:54 crc kubenswrapper[4765]: I0121 13:48:54.959751 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6954a0fd-b941-40ad-9f9c-258b83189b1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6954a0fd-b941-40ad-9f9c-258b83189b1a" (UID: "6954a0fd-b941-40ad-9f9c-258b83189b1a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:48:55 crc kubenswrapper[4765]: I0121 13:48:55.024449 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6954a0fd-b941-40ad-9f9c-258b83189b1a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:48:55 crc kubenswrapper[4765]: I0121 13:48:55.131518 4765 generic.go:334] "Generic (PLEG): container finished" podID="6954a0fd-b941-40ad-9f9c-258b83189b1a" containerID="9f6a1eb7e81a435e33a4566eba9566790c7ad65266aea1a13099cbb0ea1db344" exitCode=0 Jan 21 13:48:55 crc kubenswrapper[4765]: I0121 13:48:55.132528 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkt6f" event={"ID":"6954a0fd-b941-40ad-9f9c-258b83189b1a","Type":"ContainerDied","Data":"9f6a1eb7e81a435e33a4566eba9566790c7ad65266aea1a13099cbb0ea1db344"} Jan 21 13:48:55 crc kubenswrapper[4765]: I0121 13:48:55.132958 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkt6f" event={"ID":"6954a0fd-b941-40ad-9f9c-258b83189b1a","Type":"ContainerDied","Data":"bdc088be6b6f53db9d0d00d732dfe5451a7f1dac528bad1a6b73761451a1ff82"} Jan 21 13:48:55 crc kubenswrapper[4765]: I0121 13:48:55.133039 4765 scope.go:117] "RemoveContainer" containerID="9f6a1eb7e81a435e33a4566eba9566790c7ad65266aea1a13099cbb0ea1db344" Jan 21 13:48:55 crc kubenswrapper[4765]: I0121 13:48:55.132586 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wkt6f" Jan 21 13:48:55 crc kubenswrapper[4765]: I0121 13:48:55.164071 4765 scope.go:117] "RemoveContainer" containerID="28233df2da0b85373a860d68dcb5de85078de5f7aa2f89a3982653b7aad57490" Jan 21 13:48:55 crc kubenswrapper[4765]: I0121 13:48:55.176161 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wkt6f"] Jan 21 13:48:55 crc kubenswrapper[4765]: I0121 13:48:55.190402 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wkt6f"] Jan 21 13:48:55 crc kubenswrapper[4765]: I0121 13:48:55.238597 4765 scope.go:117] "RemoveContainer" containerID="13b535f3881716387b3409d028ea00ea3b778d40220f8506b77a8943c693adda" Jan 21 13:48:55 crc kubenswrapper[4765]: I0121 13:48:55.272022 4765 scope.go:117] "RemoveContainer" containerID="9f6a1eb7e81a435e33a4566eba9566790c7ad65266aea1a13099cbb0ea1db344" Jan 21 13:48:55 crc kubenswrapper[4765]: E0121 13:48:55.272678 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f6a1eb7e81a435e33a4566eba9566790c7ad65266aea1a13099cbb0ea1db344\": container with ID starting with 9f6a1eb7e81a435e33a4566eba9566790c7ad65266aea1a13099cbb0ea1db344 not found: ID does not exist" containerID="9f6a1eb7e81a435e33a4566eba9566790c7ad65266aea1a13099cbb0ea1db344" Jan 21 13:48:55 crc kubenswrapper[4765]: I0121 13:48:55.272715 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f6a1eb7e81a435e33a4566eba9566790c7ad65266aea1a13099cbb0ea1db344"} err="failed to get container status \"9f6a1eb7e81a435e33a4566eba9566790c7ad65266aea1a13099cbb0ea1db344\": rpc error: code = NotFound desc = could not find container \"9f6a1eb7e81a435e33a4566eba9566790c7ad65266aea1a13099cbb0ea1db344\": container with ID starting with 9f6a1eb7e81a435e33a4566eba9566790c7ad65266aea1a13099cbb0ea1db344 not found: ID does not exist" Jan 21 13:48:55 crc 
kubenswrapper[4765]: I0121 13:48:55.272758 4765 scope.go:117] "RemoveContainer" containerID="28233df2da0b85373a860d68dcb5de85078de5f7aa2f89a3982653b7aad57490" Jan 21 13:48:55 crc kubenswrapper[4765]: E0121 13:48:55.273010 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28233df2da0b85373a860d68dcb5de85078de5f7aa2f89a3982653b7aad57490\": container with ID starting with 28233df2da0b85373a860d68dcb5de85078de5f7aa2f89a3982653b7aad57490 not found: ID does not exist" containerID="28233df2da0b85373a860d68dcb5de85078de5f7aa2f89a3982653b7aad57490" Jan 21 13:48:55 crc kubenswrapper[4765]: I0121 13:48:55.273031 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28233df2da0b85373a860d68dcb5de85078de5f7aa2f89a3982653b7aad57490"} err="failed to get container status \"28233df2da0b85373a860d68dcb5de85078de5f7aa2f89a3982653b7aad57490\": rpc error: code = NotFound desc = could not find container \"28233df2da0b85373a860d68dcb5de85078de5f7aa2f89a3982653b7aad57490\": container with ID starting with 28233df2da0b85373a860d68dcb5de85078de5f7aa2f89a3982653b7aad57490 not found: ID does not exist" Jan 21 13:48:55 crc kubenswrapper[4765]: I0121 13:48:55.273045 4765 scope.go:117] "RemoveContainer" containerID="13b535f3881716387b3409d028ea00ea3b778d40220f8506b77a8943c693adda" Jan 21 13:48:55 crc kubenswrapper[4765]: E0121 13:48:55.273365 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13b535f3881716387b3409d028ea00ea3b778d40220f8506b77a8943c693adda\": container with ID starting with 13b535f3881716387b3409d028ea00ea3b778d40220f8506b77a8943c693adda not found: ID does not exist" containerID="13b535f3881716387b3409d028ea00ea3b778d40220f8506b77a8943c693adda" Jan 21 13:48:55 crc kubenswrapper[4765]: I0121 13:48:55.273387 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13b535f3881716387b3409d028ea00ea3b778d40220f8506b77a8943c693adda"} err="failed to get container status \"13b535f3881716387b3409d028ea00ea3b778d40220f8506b77a8943c693adda\": rpc error: code = NotFound desc = could not find container \"13b535f3881716387b3409d028ea00ea3b778d40220f8506b77a8943c693adda\": container with ID starting with 13b535f3881716387b3409d028ea00ea3b778d40220f8506b77a8943c693adda not found: ID does not exist" Jan 21 13:48:55 crc kubenswrapper[4765]: I0121 13:48:55.635716 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6954a0fd-b941-40ad-9f9c-258b83189b1a" path="/var/lib/kubelet/pods/6954a0fd-b941-40ad-9f9c-258b83189b1a/volumes" Jan 21 13:49:14 crc kubenswrapper[4765]: I0121 13:49:14.446195 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:49:14 crc kubenswrapper[4765]: I0121 13:49:14.446740 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:49:44 crc kubenswrapper[4765]: I0121 13:49:44.445940 4765 patch_prober.go:28] interesting 
pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:49:44 crc kubenswrapper[4765]: I0121 13:49:44.446725 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:49:44 crc kubenswrapper[4765]: I0121 13:49:44.446772 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:49:44 crc kubenswrapper[4765]: I0121 13:49:44.447541 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"289eeaf139eef3057016b49afbca88f96cc90417dca0a155ef85620d4bfd08bb"} pod="openshift-machine-config-operator/machine-config-daemon-v72nq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:49:44 crc kubenswrapper[4765]: I0121 13:49:44.447597 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" containerID="cri-o://289eeaf139eef3057016b49afbca88f96cc90417dca0a155ef85620d4bfd08bb" gracePeriod=600 Jan 21 13:49:44 crc kubenswrapper[4765]: I0121 13:49:44.617369 4765 generic.go:334] "Generic (PLEG): container finished" podID="e149390c-e4da-4dfd-bed2-b14de058f921" containerID="289eeaf139eef3057016b49afbca88f96cc90417dca0a155ef85620d4bfd08bb" exitCode=0 Jan 21 13:49:44 crc kubenswrapper[4765]: I0121 13:49:44.617423 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerDied","Data":"289eeaf139eef3057016b49afbca88f96cc90417dca0a155ef85620d4bfd08bb"} Jan 21 13:49:44 crc kubenswrapper[4765]: I0121 13:49:44.617455 4765 scope.go:117] "RemoveContainer" containerID="d03ea682c33560f5274fe2b8fa361387fe65e74649c52021963ee8a0f243d4c0" Jan 21 13:49:45 crc kubenswrapper[4765]: I0121 13:49:45.628470 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217"} Jan 21 13:49:48 crc kubenswrapper[4765]: I0121 13:49:48.661345 4765 generic.go:334] "Generic (PLEG): container finished" podID="13a3818b-4be7-40d0-99d2-ae84ab4caceb" containerID="bde977e688c0036311f9ff673f772653b88d47e0808d365cac24624b4152b880" exitCode=0 Jan 21 13:49:48 crc kubenswrapper[4765]: I0121 13:49:48.661529 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx" event={"ID":"13a3818b-4be7-40d0-99d2-ae84ab4caceb","Type":"ContainerDied","Data":"bde977e688c0036311f9ff673f772653b88d47e0808d365cac24624b4152b880"} Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.192862 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.243162 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-cell1-compute-config-0\") pod \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.286756 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "13a3818b-4be7-40d0-99d2-ae84ab4caceb" (UID: "13a3818b-4be7-40d0-99d2-ae84ab4caceb"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.345504 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-extra-config-0\") pod \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.345922 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pmfv\" (UniqueName: \"kubernetes.io/projected/13a3818b-4be7-40d0-99d2-ae84ab4caceb-kube-api-access-9pmfv\") pod \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.346415 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-migration-ssh-key-1\") pod \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.346467 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-inventory\") pod \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.346563 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-cell1-compute-config-1\") pod \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.346625 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-migration-ssh-key-0\") pod \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.346664 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-combined-ca-bundle\") pod \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 
13:49:50.346799 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-ssh-key-openstack-edpm-ipam\") pod \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\" (UID: \"13a3818b-4be7-40d0-99d2-ae84ab4caceb\") " Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.347496 4765 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.351494 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13a3818b-4be7-40d0-99d2-ae84ab4caceb-kube-api-access-9pmfv" (OuterVolumeSpecName: "kube-api-access-9pmfv") pod "13a3818b-4be7-40d0-99d2-ae84ab4caceb" (UID: "13a3818b-4be7-40d0-99d2-ae84ab4caceb"). InnerVolumeSpecName "kube-api-access-9pmfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.353319 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "13a3818b-4be7-40d0-99d2-ae84ab4caceb" (UID: "13a3818b-4be7-40d0-99d2-ae84ab4caceb"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.378164 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "13a3818b-4be7-40d0-99d2-ae84ab4caceb" (UID: "13a3818b-4be7-40d0-99d2-ae84ab4caceb"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.386055 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "13a3818b-4be7-40d0-99d2-ae84ab4caceb" (UID: "13a3818b-4be7-40d0-99d2-ae84ab4caceb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.388764 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "13a3818b-4be7-40d0-99d2-ae84ab4caceb" (UID: "13a3818b-4be7-40d0-99d2-ae84ab4caceb"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.388903 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-inventory" (OuterVolumeSpecName: "inventory") pod "13a3818b-4be7-40d0-99d2-ae84ab4caceb" (UID: "13a3818b-4be7-40d0-99d2-ae84ab4caceb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.390828 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "13a3818b-4be7-40d0-99d2-ae84ab4caceb" (UID: "13a3818b-4be7-40d0-99d2-ae84ab4caceb"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.400167 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "13a3818b-4be7-40d0-99d2-ae84ab4caceb" (UID: "13a3818b-4be7-40d0-99d2-ae84ab4caceb"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.449151 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.449194 4765 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.449222 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pmfv\" (UniqueName: \"kubernetes.io/projected/13a3818b-4be7-40d0-99d2-ae84ab4caceb-kube-api-access-9pmfv\") on node \"crc\" DevicePath \"\"" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.449238 4765 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.449252 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.449265 4765 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.449278 4765 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.449289 4765 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a3818b-4be7-40d0-99d2-ae84ab4caceb-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.678768 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx" 
event={"ID":"13a3818b-4be7-40d0-99d2-ae84ab4caceb","Type":"ContainerDied","Data":"d49cc3bb65d213756f6f320a15ea5983a08a18ce5e9c3051f1cdf21abe2a41de"} Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.678816 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d49cc3bb65d213756f6f320a15ea5983a08a18ce5e9c3051f1cdf21abe2a41de" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.678835 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-pntmx" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.804141 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb"] Jan 21 13:49:50 crc kubenswrapper[4765]: E0121 13:49:50.805170 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6954a0fd-b941-40ad-9f9c-258b83189b1a" containerName="extract-content" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.805192 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="6954a0fd-b941-40ad-9f9c-258b83189b1a" containerName="extract-content" Jan 21 13:49:50 crc kubenswrapper[4765]: E0121 13:49:50.805220 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13a3818b-4be7-40d0-99d2-ae84ab4caceb" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.805227 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="13a3818b-4be7-40d0-99d2-ae84ab4caceb" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 21 13:49:50 crc kubenswrapper[4765]: E0121 13:49:50.805245 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6954a0fd-b941-40ad-9f9c-258b83189b1a" containerName="registry-server" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.805251 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="6954a0fd-b941-40ad-9f9c-258b83189b1a" containerName="registry-server" Jan 21 13:49:50 crc kubenswrapper[4765]: E0121 13:49:50.805275 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6954a0fd-b941-40ad-9f9c-258b83189b1a" containerName="extract-utilities" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.805282 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="6954a0fd-b941-40ad-9f9c-258b83189b1a" containerName="extract-utilities" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.805444 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="13a3818b-4be7-40d0-99d2-ae84ab4caceb" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.805457 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="6954a0fd-b941-40ad-9f9c-258b83189b1a" containerName="registry-server" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.806073 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.810793 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-x88cm" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.812274 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.812367 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.812419 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.812480 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.818523 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb"] Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.855519 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.855567 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.855656 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.855680 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.855706 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" 
Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.855750 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.855839 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl582\" (UniqueName: \"kubernetes.io/projected/72b52054-c641-4cfb-9e83-f5b6794f77de-kube-api-access-tl582\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.957083 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.957126 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.957170 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.957191 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.957236 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.957278 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.957343 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl582\" (UniqueName: \"kubernetes.io/projected/72b52054-c641-4cfb-9e83-f5b6794f77de-kube-api-access-tl582\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.963447 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.963580 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.963907 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.964281 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.964822 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.964850 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:50 crc kubenswrapper[4765]: I0121 13:49:50.974889 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl582\" (UniqueName: 
\"kubernetes.io/projected/72b52054-c641-4cfb-9e83-f5b6794f77de-kube-api-access-tl582\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:51 crc kubenswrapper[4765]: I0121 13:49:51.126622 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:49:51 crc kubenswrapper[4765]: I0121 13:49:51.665739 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb"] Jan 21 13:49:51 crc kubenswrapper[4765]: I0121 13:49:51.692450 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" event={"ID":"72b52054-c641-4cfb-9e83-f5b6794f77de","Type":"ContainerStarted","Data":"6217153074e42459649180a7bde1f22ec95794b07dbf775089a641f8a3eedd3b"} Jan 21 13:49:52 crc kubenswrapper[4765]: I0121 13:49:52.702433 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" event={"ID":"72b52054-c641-4cfb-9e83-f5b6794f77de","Type":"ContainerStarted","Data":"f4a5177d560ce63dd1f1bb6df1ae5c1efed0183c78bd856ad86e674c7cd99d09"} Jan 21 13:49:52 crc kubenswrapper[4765]: I0121 13:49:52.724286 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" podStartSLOduration=2.2289308549999998 podStartE2EDuration="2.724268441s" podCreationTimestamp="2026-01-21 13:49:50 +0000 UTC" firstStartedPulling="2026-01-21 13:49:51.675785681 +0000 UTC m=+2852.693511513" lastFinishedPulling="2026-01-21 13:49:52.171123277 +0000 UTC m=+2853.188849099" observedRunningTime="2026-01-21 13:49:52.720906887 +0000 UTC m=+2853.738632709" watchObservedRunningTime="2026-01-21 13:49:52.724268441 +0000 UTC m=+2853.741994263" Jan 21 13:50:44 crc kubenswrapper[4765]: I0121 13:50:44.399973 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r6z5l"] Jan 21 13:50:44 crc kubenswrapper[4765]: I0121 13:50:44.402361 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6z5l" Jan 21 13:50:44 crc kubenswrapper[4765]: I0121 13:50:44.416574 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6z5l"] Jan 21 13:50:44 crc kubenswrapper[4765]: I0121 13:50:44.461772 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8whqs\" (UniqueName: \"kubernetes.io/projected/8e9b2e64-565f-4eee-8074-0fe6306728ea-kube-api-access-8whqs\") pod \"redhat-marketplace-r6z5l\" (UID: \"8e9b2e64-565f-4eee-8074-0fe6306728ea\") " pod="openshift-marketplace/redhat-marketplace-r6z5l" Jan 21 13:50:44 crc kubenswrapper[4765]: I0121 13:50:44.461856 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9b2e64-565f-4eee-8074-0fe6306728ea-utilities\") pod \"redhat-marketplace-r6z5l\" (UID: \"8e9b2e64-565f-4eee-8074-0fe6306728ea\") " pod="openshift-marketplace/redhat-marketplace-r6z5l" Jan 21 13:50:44 crc kubenswrapper[4765]: I0121 13:50:44.462029 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9b2e64-565f-4eee-8074-0fe6306728ea-catalog-content\") pod \"redhat-marketplace-r6z5l\" (UID: \"8e9b2e64-565f-4eee-8074-0fe6306728ea\") " pod="openshift-marketplace/redhat-marketplace-r6z5l" Jan 21 13:50:44 crc kubenswrapper[4765]: I0121 13:50:44.564028 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9b2e64-565f-4eee-8074-0fe6306728ea-catalog-content\") pod \"redhat-marketplace-r6z5l\" (UID: \"8e9b2e64-565f-4eee-8074-0fe6306728ea\") " pod="openshift-marketplace/redhat-marketplace-r6z5l" Jan 21 13:50:44 crc kubenswrapper[4765]: I0121 13:50:44.564108 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8whqs\" (UniqueName: \"kubernetes.io/projected/8e9b2e64-565f-4eee-8074-0fe6306728ea-kube-api-access-8whqs\") pod \"redhat-marketplace-r6z5l\" (UID: \"8e9b2e64-565f-4eee-8074-0fe6306728ea\") " pod="openshift-marketplace/redhat-marketplace-r6z5l" Jan 21 13:50:44 crc kubenswrapper[4765]: I0121 13:50:44.564159 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9b2e64-565f-4eee-8074-0fe6306728ea-utilities\") pod \"redhat-marketplace-r6z5l\" (UID: \"8e9b2e64-565f-4eee-8074-0fe6306728ea\") " pod="openshift-marketplace/redhat-marketplace-r6z5l" Jan 21 13:50:44 crc kubenswrapper[4765]: I0121 13:50:44.564707 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9b2e64-565f-4eee-8074-0fe6306728ea-catalog-content\") pod \"redhat-marketplace-r6z5l\" (UID: \"8e9b2e64-565f-4eee-8074-0fe6306728ea\") " pod="openshift-marketplace/redhat-marketplace-r6z5l" Jan 21 13:50:44 crc kubenswrapper[4765]: I0121 13:50:44.564754 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9b2e64-565f-4eee-8074-0fe6306728ea-utilities\") pod \"redhat-marketplace-r6z5l\" (UID: \"8e9b2e64-565f-4eee-8074-0fe6306728ea\") " pod="openshift-marketplace/redhat-marketplace-r6z5l" Jan 21 13:50:44 crc kubenswrapper[4765]: I0121 13:50:44.593577 4765 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8whqs\" (UniqueName: \"kubernetes.io/projected/8e9b2e64-565f-4eee-8074-0fe6306728ea-kube-api-access-8whqs\") pod \"redhat-marketplace-r6z5l\" (UID: \"8e9b2e64-565f-4eee-8074-0fe6306728ea\") " pod="openshift-marketplace/redhat-marketplace-r6z5l" Jan 21 13:50:44 crc kubenswrapper[4765]: I0121 13:50:44.742610 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6z5l" Jan 21 13:50:45 crc kubenswrapper[4765]: I0121 13:50:45.253827 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6z5l"] Jan 21 13:50:46 crc kubenswrapper[4765]: I0121 13:50:46.192474 4765 generic.go:334] "Generic (PLEG): container finished" podID="8e9b2e64-565f-4eee-8074-0fe6306728ea" containerID="c7bc618da6754784c597a18b80c331f1df14b033e694d1b132ab287ed1e707b3" exitCode=0 Jan 21 13:50:46 crc kubenswrapper[4765]: I0121 13:50:46.192531 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6z5l" event={"ID":"8e9b2e64-565f-4eee-8074-0fe6306728ea","Type":"ContainerDied","Data":"c7bc618da6754784c597a18b80c331f1df14b033e694d1b132ab287ed1e707b3"} Jan 21 13:50:46 crc kubenswrapper[4765]: I0121 13:50:46.192748 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6z5l" event={"ID":"8e9b2e64-565f-4eee-8074-0fe6306728ea","Type":"ContainerStarted","Data":"afe07ea8d064e42a09d80640153c0ca3e28b6001421a8f507c5a20d67a352094"} Jan 21 13:50:47 crc kubenswrapper[4765]: I0121 13:50:47.202505 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6z5l" event={"ID":"8e9b2e64-565f-4eee-8074-0fe6306728ea","Type":"ContainerStarted","Data":"936e6b73dadefb603e118f3746e3312e5d0e620452e19fdd5d4166d7c2f99707"} Jan 21 13:50:48 crc kubenswrapper[4765]: I0121 13:50:48.212977 4765 generic.go:334] "Generic (PLEG): container finished" podID="8e9b2e64-565f-4eee-8074-0fe6306728ea" containerID="936e6b73dadefb603e118f3746e3312e5d0e620452e19fdd5d4166d7c2f99707" exitCode=0 Jan 21 13:50:48 crc kubenswrapper[4765]: I0121 13:50:48.213047 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6z5l" event={"ID":"8e9b2e64-565f-4eee-8074-0fe6306728ea","Type":"ContainerDied","Data":"936e6b73dadefb603e118f3746e3312e5d0e620452e19fdd5d4166d7c2f99707"} Jan 21 13:50:49 crc kubenswrapper[4765]: I0121 13:50:49.226091 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6z5l" event={"ID":"8e9b2e64-565f-4eee-8074-0fe6306728ea","Type":"ContainerStarted","Data":"695d145081a538417a1434b75fe1b89eeaaea47e028607b52fb9644e24f66706"} Jan 21 13:50:49 crc kubenswrapper[4765]: I0121 13:50:49.252416 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r6z5l" podStartSLOduration=2.7601395159999997 podStartE2EDuration="5.252399568s" podCreationTimestamp="2026-01-21 13:50:44 +0000 UTC" firstStartedPulling="2026-01-21 13:50:46.19456983 +0000 UTC m=+2907.212295652" lastFinishedPulling="2026-01-21 13:50:48.686829862 +0000 UTC m=+2909.704555704" observedRunningTime="2026-01-21 13:50:49.248084523 +0000 UTC m=+2910.265810355" watchObservedRunningTime="2026-01-21 13:50:49.252399568 +0000 UTC m=+2910.270125390" Jan 21 13:50:54 crc kubenswrapper[4765]: I0121 13:50:54.743774 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-r6z5l" Jan 21 13:50:54 crc kubenswrapper[4765]: I0121 13:50:54.744655 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r6z5l" Jan 21 13:50:54 crc kubenswrapper[4765]: I0121 13:50:54.802021 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r6z5l" Jan 21 13:50:55 crc kubenswrapper[4765]: I0121 13:50:55.328784 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r6z5l" Jan 21 13:50:55 crc kubenswrapper[4765]: I0121 13:50:55.396561 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6z5l"] Jan 21 13:50:57 crc kubenswrapper[4765]: I0121 13:50:57.309648 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r6z5l" podUID="8e9b2e64-565f-4eee-8074-0fe6306728ea" containerName="registry-server" containerID="cri-o://695d145081a538417a1434b75fe1b89eeaaea47e028607b52fb9644e24f66706" gracePeriod=2 Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.301640 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6z5l" Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.389947 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8whqs\" (UniqueName: \"kubernetes.io/projected/8e9b2e64-565f-4eee-8074-0fe6306728ea-kube-api-access-8whqs\") pod \"8e9b2e64-565f-4eee-8074-0fe6306728ea\" (UID: \"8e9b2e64-565f-4eee-8074-0fe6306728ea\") " Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.390029 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9b2e64-565f-4eee-8074-0fe6306728ea-utilities\") pod \"8e9b2e64-565f-4eee-8074-0fe6306728ea\" (UID: \"8e9b2e64-565f-4eee-8074-0fe6306728ea\") " Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.390082 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9b2e64-565f-4eee-8074-0fe6306728ea-catalog-content\") pod \"8e9b2e64-565f-4eee-8074-0fe6306728ea\" (UID: \"8e9b2e64-565f-4eee-8074-0fe6306728ea\") " Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.391921 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e9b2e64-565f-4eee-8074-0fe6306728ea-utilities" (OuterVolumeSpecName: "utilities") pod "8e9b2e64-565f-4eee-8074-0fe6306728ea" (UID: "8e9b2e64-565f-4eee-8074-0fe6306728ea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.396354 4765 generic.go:334] "Generic (PLEG): container finished" podID="8e9b2e64-565f-4eee-8074-0fe6306728ea" containerID="695d145081a538417a1434b75fe1b89eeaaea47e028607b52fb9644e24f66706" exitCode=0 Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.396397 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6z5l" event={"ID":"8e9b2e64-565f-4eee-8074-0fe6306728ea","Type":"ContainerDied","Data":"695d145081a538417a1434b75fe1b89eeaaea47e028607b52fb9644e24f66706"} Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.396479 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6z5l" event={"ID":"8e9b2e64-565f-4eee-8074-0fe6306728ea","Type":"ContainerDied","Data":"afe07ea8d064e42a09d80640153c0ca3e28b6001421a8f507c5a20d67a352094"} Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.396501 4765 scope.go:117] "RemoveContainer" containerID="695d145081a538417a1434b75fe1b89eeaaea47e028607b52fb9644e24f66706" Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.396591 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6z5l" Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.417646 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e9b2e64-565f-4eee-8074-0fe6306728ea-kube-api-access-8whqs" (OuterVolumeSpecName: "kube-api-access-8whqs") pod "8e9b2e64-565f-4eee-8074-0fe6306728ea" (UID: "8e9b2e64-565f-4eee-8074-0fe6306728ea"). InnerVolumeSpecName "kube-api-access-8whqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.424013 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e9b2e64-565f-4eee-8074-0fe6306728ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e9b2e64-565f-4eee-8074-0fe6306728ea" (UID: "8e9b2e64-565f-4eee-8074-0fe6306728ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.463483 4765 scope.go:117] "RemoveContainer" containerID="936e6b73dadefb603e118f3746e3312e5d0e620452e19fdd5d4166d7c2f99707" Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.483016 4765 scope.go:117] "RemoveContainer" containerID="c7bc618da6754784c597a18b80c331f1df14b033e694d1b132ab287ed1e707b3" Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.491746 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8whqs\" (UniqueName: \"kubernetes.io/projected/8e9b2e64-565f-4eee-8074-0fe6306728ea-kube-api-access-8whqs\") on node \"crc\" DevicePath \"\"" Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.491788 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e9b2e64-565f-4eee-8074-0fe6306728ea-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.491806 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e9b2e64-565f-4eee-8074-0fe6306728ea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.531454 4765 scope.go:117] "RemoveContainer" containerID="695d145081a538417a1434b75fe1b89eeaaea47e028607b52fb9644e24f66706" Jan 21 13:50:58 crc kubenswrapper[4765]: E0121 13:50:58.531950 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"695d145081a538417a1434b75fe1b89eeaaea47e028607b52fb9644e24f66706\": container with ID starting with 695d145081a538417a1434b75fe1b89eeaaea47e028607b52fb9644e24f66706 not found: ID does not exist" containerID="695d145081a538417a1434b75fe1b89eeaaea47e028607b52fb9644e24f66706" Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.531983 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"695d145081a538417a1434b75fe1b89eeaaea47e028607b52fb9644e24f66706"} err="failed to get container status \"695d145081a538417a1434b75fe1b89eeaaea47e028607b52fb9644e24f66706\": rpc error: code = NotFound desc = could not find container \"695d145081a538417a1434b75fe1b89eeaaea47e028607b52fb9644e24f66706\": container with ID starting with 695d145081a538417a1434b75fe1b89eeaaea47e028607b52fb9644e24f66706 not found: ID does not exist" Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.532004 4765 scope.go:117] "RemoveContainer" containerID="936e6b73dadefb603e118f3746e3312e5d0e620452e19fdd5d4166d7c2f99707" Jan 21 13:50:58 crc kubenswrapper[4765]: E0121 13:50:58.532557 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"936e6b73dadefb603e118f3746e3312e5d0e620452e19fdd5d4166d7c2f99707\": container with ID starting with 936e6b73dadefb603e118f3746e3312e5d0e620452e19fdd5d4166d7c2f99707 not found: ID does not exist" containerID="936e6b73dadefb603e118f3746e3312e5d0e620452e19fdd5d4166d7c2f99707" Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.532592 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"936e6b73dadefb603e118f3746e3312e5d0e620452e19fdd5d4166d7c2f99707"} err="failed to get container status \"936e6b73dadefb603e118f3746e3312e5d0e620452e19fdd5d4166d7c2f99707\": rpc error: code = NotFound desc = could not find container 
\"936e6b73dadefb603e118f3746e3312e5d0e620452e19fdd5d4166d7c2f99707\": container with ID starting with 936e6b73dadefb603e118f3746e3312e5d0e620452e19fdd5d4166d7c2f99707 not found: ID does not exist" Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.532614 4765 scope.go:117] "RemoveContainer" containerID="c7bc618da6754784c597a18b80c331f1df14b033e694d1b132ab287ed1e707b3" Jan 21 13:50:58 crc kubenswrapper[4765]: E0121 13:50:58.532913 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7bc618da6754784c597a18b80c331f1df14b033e694d1b132ab287ed1e707b3\": container with ID starting with c7bc618da6754784c597a18b80c331f1df14b033e694d1b132ab287ed1e707b3 not found: ID does not exist" containerID="c7bc618da6754784c597a18b80c331f1df14b033e694d1b132ab287ed1e707b3" Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.532945 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7bc618da6754784c597a18b80c331f1df14b033e694d1b132ab287ed1e707b3"} err="failed to get container status \"c7bc618da6754784c597a18b80c331f1df14b033e694d1b132ab287ed1e707b3\": rpc error: code = NotFound desc = could not find container \"c7bc618da6754784c597a18b80c331f1df14b033e694d1b132ab287ed1e707b3\": container with ID starting with c7bc618da6754784c597a18b80c331f1df14b033e694d1b132ab287ed1e707b3 not found: ID does not exist" Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.734776 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6z5l"] Jan 21 13:50:58 crc kubenswrapper[4765]: I0121 13:50:58.743453 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6z5l"] Jan 21 13:50:59 crc kubenswrapper[4765]: I0121 13:50:59.626242 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e9b2e64-565f-4eee-8074-0fe6306728ea" path="/var/lib/kubelet/pods/8e9b2e64-565f-4eee-8074-0fe6306728ea/volumes" Jan 21 13:51:26 crc kubenswrapper[4765]: I0121 13:51:26.577104 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w6ld6"] Jan 21 13:51:26 crc kubenswrapper[4765]: E0121 13:51:26.578124 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9b2e64-565f-4eee-8074-0fe6306728ea" containerName="extract-content" Jan 21 13:51:26 crc kubenswrapper[4765]: I0121 13:51:26.578141 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9b2e64-565f-4eee-8074-0fe6306728ea" containerName="extract-content" Jan 21 13:51:26 crc kubenswrapper[4765]: E0121 13:51:26.578174 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9b2e64-565f-4eee-8074-0fe6306728ea" containerName="registry-server" Jan 21 13:51:26 crc kubenswrapper[4765]: I0121 13:51:26.578182 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9b2e64-565f-4eee-8074-0fe6306728ea" containerName="registry-server" Jan 21 13:51:26 crc kubenswrapper[4765]: E0121 13:51:26.578248 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9b2e64-565f-4eee-8074-0fe6306728ea" containerName="extract-utilities" Jan 21 13:51:26 crc kubenswrapper[4765]: I0121 13:51:26.578255 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9b2e64-565f-4eee-8074-0fe6306728ea" containerName="extract-utilities" Jan 21 13:51:26 crc kubenswrapper[4765]: I0121 13:51:26.578471 4765 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8e9b2e64-565f-4eee-8074-0fe6306728ea" containerName="registry-server" Jan 21 13:51:26 crc kubenswrapper[4765]: I0121 13:51:26.580117 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w6ld6" Jan 21 13:51:26 crc kubenswrapper[4765]: I0121 13:51:26.591411 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w6ld6"] Jan 21 13:51:26 crc kubenswrapper[4765]: I0121 13:51:26.772695 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02fb43e6-a20f-463f-98db-4b81eec65e33-catalog-content\") pod \"community-operators-w6ld6\" (UID: \"02fb43e6-a20f-463f-98db-4b81eec65e33\") " pod="openshift-marketplace/community-operators-w6ld6" Jan 21 13:51:26 crc kubenswrapper[4765]: I0121 13:51:26.772758 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02fb43e6-a20f-463f-98db-4b81eec65e33-utilities\") pod \"community-operators-w6ld6\" (UID: \"02fb43e6-a20f-463f-98db-4b81eec65e33\") " pod="openshift-marketplace/community-operators-w6ld6" Jan 21 13:51:26 crc kubenswrapper[4765]: I0121 13:51:26.772839 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9m8s\" (UniqueName: \"kubernetes.io/projected/02fb43e6-a20f-463f-98db-4b81eec65e33-kube-api-access-g9m8s\") pod \"community-operators-w6ld6\" (UID: \"02fb43e6-a20f-463f-98db-4b81eec65e33\") " pod="openshift-marketplace/community-operators-w6ld6" Jan 21 13:51:26 crc kubenswrapper[4765]: I0121 13:51:26.874156 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02fb43e6-a20f-463f-98db-4b81eec65e33-utilities\") pod \"community-operators-w6ld6\" (UID: \"02fb43e6-a20f-463f-98db-4b81eec65e33\") " pod="openshift-marketplace/community-operators-w6ld6" Jan 21 13:51:26 crc kubenswrapper[4765]: I0121 13:51:26.874258 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9m8s\" (UniqueName: \"kubernetes.io/projected/02fb43e6-a20f-463f-98db-4b81eec65e33-kube-api-access-g9m8s\") pod \"community-operators-w6ld6\" (UID: \"02fb43e6-a20f-463f-98db-4b81eec65e33\") " pod="openshift-marketplace/community-operators-w6ld6" Jan 21 13:51:26 crc kubenswrapper[4765]: I0121 13:51:26.874374 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02fb43e6-a20f-463f-98db-4b81eec65e33-catalog-content\") pod \"community-operators-w6ld6\" (UID: \"02fb43e6-a20f-463f-98db-4b81eec65e33\") " pod="openshift-marketplace/community-operators-w6ld6" Jan 21 13:51:26 crc kubenswrapper[4765]: I0121 13:51:26.874690 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02fb43e6-a20f-463f-98db-4b81eec65e33-utilities\") pod \"community-operators-w6ld6\" (UID: \"02fb43e6-a20f-463f-98db-4b81eec65e33\") " pod="openshift-marketplace/community-operators-w6ld6" Jan 21 13:51:26 crc kubenswrapper[4765]: I0121 13:51:26.874744 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02fb43e6-a20f-463f-98db-4b81eec65e33-catalog-content\") pod \"community-operators-w6ld6\" (UID: 
\"02fb43e6-a20f-463f-98db-4b81eec65e33\") " pod="openshift-marketplace/community-operators-w6ld6" Jan 21 13:51:26 crc kubenswrapper[4765]: I0121 13:51:26.901510 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9m8s\" (UniqueName: \"kubernetes.io/projected/02fb43e6-a20f-463f-98db-4b81eec65e33-kube-api-access-g9m8s\") pod \"community-operators-w6ld6\" (UID: \"02fb43e6-a20f-463f-98db-4b81eec65e33\") " pod="openshift-marketplace/community-operators-w6ld6" Jan 21 13:51:26 crc kubenswrapper[4765]: I0121 13:51:26.924000 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w6ld6" Jan 21 13:51:27 crc kubenswrapper[4765]: I0121 13:51:27.519831 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w6ld6"] Jan 21 13:51:27 crc kubenswrapper[4765]: I0121 13:51:27.733962 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6ld6" event={"ID":"02fb43e6-a20f-463f-98db-4b81eec65e33","Type":"ContainerStarted","Data":"40114003632b3306520290414030b542c399b24238788c03e3b2ac8e0bb7a6ce"} Jan 21 13:51:28 crc kubenswrapper[4765]: I0121 13:51:28.746083 4765 generic.go:334] "Generic (PLEG): container finished" podID="02fb43e6-a20f-463f-98db-4b81eec65e33" containerID="46dd2183510fff6cbf08774235664d526eaa80cdf2594f3f48f4be1c71615699" exitCode=0 Jan 21 13:51:28 crc kubenswrapper[4765]: I0121 13:51:28.747296 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6ld6" event={"ID":"02fb43e6-a20f-463f-98db-4b81eec65e33","Type":"ContainerDied","Data":"46dd2183510fff6cbf08774235664d526eaa80cdf2594f3f48f4be1c71615699"} Jan 21 13:51:30 crc kubenswrapper[4765]: I0121 13:51:30.777457 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6ld6" event={"ID":"02fb43e6-a20f-463f-98db-4b81eec65e33","Type":"ContainerStarted","Data":"fbe52aab6119ecb3cbe5495748f81887507d3d51fec6e695cfb32b51a8acc1af"} Jan 21 13:51:31 crc kubenswrapper[4765]: I0121 13:51:31.787800 4765 generic.go:334] "Generic (PLEG): container finished" podID="02fb43e6-a20f-463f-98db-4b81eec65e33" containerID="fbe52aab6119ecb3cbe5495748f81887507d3d51fec6e695cfb32b51a8acc1af" exitCode=0 Jan 21 13:51:31 crc kubenswrapper[4765]: I0121 13:51:31.787896 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6ld6" event={"ID":"02fb43e6-a20f-463f-98db-4b81eec65e33","Type":"ContainerDied","Data":"fbe52aab6119ecb3cbe5495748f81887507d3d51fec6e695cfb32b51a8acc1af"} Jan 21 13:51:32 crc kubenswrapper[4765]: I0121 13:51:32.798937 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6ld6" event={"ID":"02fb43e6-a20f-463f-98db-4b81eec65e33","Type":"ContainerStarted","Data":"d516acdc02a8d9726ede2f4603b590384951e42d6a00ecda4e072dd92fabffb3"} Jan 21 13:51:32 crc kubenswrapper[4765]: I0121 13:51:32.833971 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w6ld6" podStartSLOduration=3.29351302 podStartE2EDuration="6.833951096s" podCreationTimestamp="2026-01-21 13:51:26 +0000 UTC" firstStartedPulling="2026-01-21 13:51:28.750468755 +0000 UTC m=+2949.768194577" lastFinishedPulling="2026-01-21 13:51:32.290906831 +0000 UTC m=+2953.308632653" observedRunningTime="2026-01-21 13:51:32.823160164 +0000 UTC m=+2953.840885986" 
watchObservedRunningTime="2026-01-21 13:51:32.833951096 +0000 UTC m=+2953.851676918" Jan 21 13:51:36 crc kubenswrapper[4765]: I0121 13:51:36.925026 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w6ld6" Jan 21 13:51:36 crc kubenswrapper[4765]: I0121 13:51:36.925679 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w6ld6" Jan 21 13:51:36 crc kubenswrapper[4765]: I0121 13:51:36.979085 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w6ld6" Jan 21 13:51:37 crc kubenswrapper[4765]: I0121 13:51:37.885943 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w6ld6" Jan 21 13:51:37 crc kubenswrapper[4765]: I0121 13:51:37.942569 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w6ld6"] Jan 21 13:51:39 crc kubenswrapper[4765]: I0121 13:51:39.858322 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w6ld6" podUID="02fb43e6-a20f-463f-98db-4b81eec65e33" containerName="registry-server" containerID="cri-o://d516acdc02a8d9726ede2f4603b590384951e42d6a00ecda4e072dd92fabffb3" gracePeriod=2 Jan 21 13:51:40 crc kubenswrapper[4765]: I0121 13:51:40.325593 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w6ld6" Jan 21 13:51:40 crc kubenswrapper[4765]: I0121 13:51:40.460579 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02fb43e6-a20f-463f-98db-4b81eec65e33-catalog-content\") pod \"02fb43e6-a20f-463f-98db-4b81eec65e33\" (UID: \"02fb43e6-a20f-463f-98db-4b81eec65e33\") " Jan 21 13:51:40 crc kubenswrapper[4765]: I0121 13:51:40.460683 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02fb43e6-a20f-463f-98db-4b81eec65e33-utilities\") pod \"02fb43e6-a20f-463f-98db-4b81eec65e33\" (UID: \"02fb43e6-a20f-463f-98db-4b81eec65e33\") " Jan 21 13:51:40 crc kubenswrapper[4765]: I0121 13:51:40.460888 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9m8s\" (UniqueName: \"kubernetes.io/projected/02fb43e6-a20f-463f-98db-4b81eec65e33-kube-api-access-g9m8s\") pod \"02fb43e6-a20f-463f-98db-4b81eec65e33\" (UID: \"02fb43e6-a20f-463f-98db-4b81eec65e33\") " Jan 21 13:51:40 crc kubenswrapper[4765]: I0121 13:51:40.463311 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02fb43e6-a20f-463f-98db-4b81eec65e33-utilities" (OuterVolumeSpecName: "utilities") pod "02fb43e6-a20f-463f-98db-4b81eec65e33" (UID: "02fb43e6-a20f-463f-98db-4b81eec65e33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:51:40 crc kubenswrapper[4765]: I0121 13:51:40.470979 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02fb43e6-a20f-463f-98db-4b81eec65e33-kube-api-access-g9m8s" (OuterVolumeSpecName: "kube-api-access-g9m8s") pod "02fb43e6-a20f-463f-98db-4b81eec65e33" (UID: "02fb43e6-a20f-463f-98db-4b81eec65e33"). InnerVolumeSpecName "kube-api-access-g9m8s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:51:40 crc kubenswrapper[4765]: I0121 13:51:40.532653 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02fb43e6-a20f-463f-98db-4b81eec65e33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02fb43e6-a20f-463f-98db-4b81eec65e33" (UID: "02fb43e6-a20f-463f-98db-4b81eec65e33"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:51:40 crc kubenswrapper[4765]: I0121 13:51:40.588301 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9m8s\" (UniqueName: \"kubernetes.io/projected/02fb43e6-a20f-463f-98db-4b81eec65e33-kube-api-access-g9m8s\") on node \"crc\" DevicePath \"\"" Jan 21 13:51:40 crc kubenswrapper[4765]: I0121 13:51:40.588363 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02fb43e6-a20f-463f-98db-4b81eec65e33-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:51:40 crc kubenswrapper[4765]: I0121 13:51:40.588377 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02fb43e6-a20f-463f-98db-4b81eec65e33-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:51:40 crc kubenswrapper[4765]: I0121 13:51:40.868863 4765 generic.go:334] "Generic (PLEG): container finished" podID="02fb43e6-a20f-463f-98db-4b81eec65e33" containerID="d516acdc02a8d9726ede2f4603b590384951e42d6a00ecda4e072dd92fabffb3" exitCode=0 Jan 21 13:51:40 crc kubenswrapper[4765]: I0121 13:51:40.868907 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6ld6" event={"ID":"02fb43e6-a20f-463f-98db-4b81eec65e33","Type":"ContainerDied","Data":"d516acdc02a8d9726ede2f4603b590384951e42d6a00ecda4e072dd92fabffb3"} Jan 21 13:51:40 crc kubenswrapper[4765]: I0121 13:51:40.868941 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6ld6" event={"ID":"02fb43e6-a20f-463f-98db-4b81eec65e33","Type":"ContainerDied","Data":"40114003632b3306520290414030b542c399b24238788c03e3b2ac8e0bb7a6ce"} Jan 21 13:51:40 crc kubenswrapper[4765]: I0121 13:51:40.868962 4765 scope.go:117] "RemoveContainer" containerID="d516acdc02a8d9726ede2f4603b590384951e42d6a00ecda4e072dd92fabffb3" Jan 21 13:51:40 crc kubenswrapper[4765]: I0121 13:51:40.869110 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w6ld6" Jan 21 13:51:40 crc kubenswrapper[4765]: I0121 13:51:40.923174 4765 scope.go:117] "RemoveContainer" containerID="fbe52aab6119ecb3cbe5495748f81887507d3d51fec6e695cfb32b51a8acc1af" Jan 21 13:51:40 crc kubenswrapper[4765]: I0121 13:51:40.954538 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w6ld6"] Jan 21 13:51:40 crc kubenswrapper[4765]: I0121 13:51:40.963083 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w6ld6"] Jan 21 13:51:40 crc kubenswrapper[4765]: I0121 13:51:40.965609 4765 scope.go:117] "RemoveContainer" containerID="46dd2183510fff6cbf08774235664d526eaa80cdf2594f3f48f4be1c71615699" Jan 21 13:51:41 crc kubenswrapper[4765]: I0121 13:51:41.027104 4765 scope.go:117] "RemoveContainer" containerID="d516acdc02a8d9726ede2f4603b590384951e42d6a00ecda4e072dd92fabffb3" Jan 21 13:51:41 crc kubenswrapper[4765]: E0121 13:51:41.027606 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d516acdc02a8d9726ede2f4603b590384951e42d6a00ecda4e072dd92fabffb3\": container with ID starting with d516acdc02a8d9726ede2f4603b590384951e42d6a00ecda4e072dd92fabffb3 not found: ID does not exist" containerID="d516acdc02a8d9726ede2f4603b590384951e42d6a00ecda4e072dd92fabffb3" Jan 21 13:51:41 crc kubenswrapper[4765]: I0121 13:51:41.027660 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d516acdc02a8d9726ede2f4603b590384951e42d6a00ecda4e072dd92fabffb3"} err="failed to get container status \"d516acdc02a8d9726ede2f4603b590384951e42d6a00ecda4e072dd92fabffb3\": rpc error: code = NotFound desc = could not find container \"d516acdc02a8d9726ede2f4603b590384951e42d6a00ecda4e072dd92fabffb3\": container with ID starting with d516acdc02a8d9726ede2f4603b590384951e42d6a00ecda4e072dd92fabffb3 not found: ID does not exist" Jan 21 13:51:41 crc kubenswrapper[4765]: I0121 13:51:41.027686 4765 scope.go:117] "RemoveContainer" containerID="fbe52aab6119ecb3cbe5495748f81887507d3d51fec6e695cfb32b51a8acc1af" Jan 21 13:51:41 crc kubenswrapper[4765]: E0121 13:51:41.027993 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbe52aab6119ecb3cbe5495748f81887507d3d51fec6e695cfb32b51a8acc1af\": container with ID starting with fbe52aab6119ecb3cbe5495748f81887507d3d51fec6e695cfb32b51a8acc1af not found: ID does not exist" containerID="fbe52aab6119ecb3cbe5495748f81887507d3d51fec6e695cfb32b51a8acc1af" Jan 21 13:51:41 crc kubenswrapper[4765]: I0121 13:51:41.028020 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbe52aab6119ecb3cbe5495748f81887507d3d51fec6e695cfb32b51a8acc1af"} err="failed to get container status \"fbe52aab6119ecb3cbe5495748f81887507d3d51fec6e695cfb32b51a8acc1af\": rpc error: code = NotFound desc = could not find container \"fbe52aab6119ecb3cbe5495748f81887507d3d51fec6e695cfb32b51a8acc1af\": container with ID starting with fbe52aab6119ecb3cbe5495748f81887507d3d51fec6e695cfb32b51a8acc1af not found: ID does not exist" Jan 21 13:51:41 crc kubenswrapper[4765]: I0121 13:51:41.028036 4765 scope.go:117] "RemoveContainer" containerID="46dd2183510fff6cbf08774235664d526eaa80cdf2594f3f48f4be1c71615699" Jan 21 13:51:41 crc kubenswrapper[4765]: E0121 13:51:41.028433 4765 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"46dd2183510fff6cbf08774235664d526eaa80cdf2594f3f48f4be1c71615699\": container with ID starting with 46dd2183510fff6cbf08774235664d526eaa80cdf2594f3f48f4be1c71615699 not found: ID does not exist" containerID="46dd2183510fff6cbf08774235664d526eaa80cdf2594f3f48f4be1c71615699" Jan 21 13:51:41 crc kubenswrapper[4765]: I0121 13:51:41.028459 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46dd2183510fff6cbf08774235664d526eaa80cdf2594f3f48f4be1c71615699"} err="failed to get container status \"46dd2183510fff6cbf08774235664d526eaa80cdf2594f3f48f4be1c71615699\": rpc error: code = NotFound desc = could not find container \"46dd2183510fff6cbf08774235664d526eaa80cdf2594f3f48f4be1c71615699\": container with ID starting with 46dd2183510fff6cbf08774235664d526eaa80cdf2594f3f48f4be1c71615699 not found: ID does not exist" Jan 21 13:51:41 crc kubenswrapper[4765]: I0121 13:51:41.630135 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02fb43e6-a20f-463f-98db-4b81eec65e33" path="/var/lib/kubelet/pods/02fb43e6-a20f-463f-98db-4b81eec65e33/volumes" Jan 21 13:51:44 crc kubenswrapper[4765]: I0121 13:51:44.445741 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:51:44 crc kubenswrapper[4765]: I0121 13:51:44.446086 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:52:07 crc kubenswrapper[4765]: I0121 13:52:07.829199 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vhjm4"] Jan 21 13:52:07 crc kubenswrapper[4765]: E0121 13:52:07.830289 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02fb43e6-a20f-463f-98db-4b81eec65e33" containerName="extract-content" Jan 21 13:52:07 crc kubenswrapper[4765]: I0121 13:52:07.830305 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="02fb43e6-a20f-463f-98db-4b81eec65e33" containerName="extract-content" Jan 21 13:52:07 crc kubenswrapper[4765]: E0121 13:52:07.830325 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02fb43e6-a20f-463f-98db-4b81eec65e33" containerName="registry-server" Jan 21 13:52:07 crc kubenswrapper[4765]: I0121 13:52:07.830334 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="02fb43e6-a20f-463f-98db-4b81eec65e33" containerName="registry-server" Jan 21 13:52:07 crc kubenswrapper[4765]: E0121 13:52:07.830355 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02fb43e6-a20f-463f-98db-4b81eec65e33" containerName="extract-utilities" Jan 21 13:52:07 crc kubenswrapper[4765]: I0121 13:52:07.830365 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="02fb43e6-a20f-463f-98db-4b81eec65e33" containerName="extract-utilities" Jan 21 13:52:07 crc kubenswrapper[4765]: I0121 13:52:07.830623 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="02fb43e6-a20f-463f-98db-4b81eec65e33" containerName="registry-server" Jan 21 13:52:07 crc kubenswrapper[4765]: I0121 
13:52:07.832386 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vhjm4" Jan 21 13:52:07 crc kubenswrapper[4765]: I0121 13:52:07.841209 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vhjm4"] Jan 21 13:52:07 crc kubenswrapper[4765]: I0121 13:52:07.955991 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnw5z\" (UniqueName: \"kubernetes.io/projected/1fe94c5d-805d-4cf8-812e-e12c707022e7-kube-api-access-mnw5z\") pod \"certified-operators-vhjm4\" (UID: \"1fe94c5d-805d-4cf8-812e-e12c707022e7\") " pod="openshift-marketplace/certified-operators-vhjm4" Jan 21 13:52:07 crc kubenswrapper[4765]: I0121 13:52:07.956060 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fe94c5d-805d-4cf8-812e-e12c707022e7-catalog-content\") pod \"certified-operators-vhjm4\" (UID: \"1fe94c5d-805d-4cf8-812e-e12c707022e7\") " pod="openshift-marketplace/certified-operators-vhjm4" Jan 21 13:52:07 crc kubenswrapper[4765]: I0121 13:52:07.956274 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fe94c5d-805d-4cf8-812e-e12c707022e7-utilities\") pod \"certified-operators-vhjm4\" (UID: \"1fe94c5d-805d-4cf8-812e-e12c707022e7\") " pod="openshift-marketplace/certified-operators-vhjm4" Jan 21 13:52:08 crc kubenswrapper[4765]: I0121 13:52:08.058051 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fe94c5d-805d-4cf8-812e-e12c707022e7-utilities\") pod \"certified-operators-vhjm4\" (UID: \"1fe94c5d-805d-4cf8-812e-e12c707022e7\") " pod="openshift-marketplace/certified-operators-vhjm4" Jan 21 13:52:08 crc kubenswrapper[4765]: I0121 13:52:08.058422 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnw5z\" (UniqueName: \"kubernetes.io/projected/1fe94c5d-805d-4cf8-812e-e12c707022e7-kube-api-access-mnw5z\") pod \"certified-operators-vhjm4\" (UID: \"1fe94c5d-805d-4cf8-812e-e12c707022e7\") " pod="openshift-marketplace/certified-operators-vhjm4" Jan 21 13:52:08 crc kubenswrapper[4765]: I0121 13:52:08.058451 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fe94c5d-805d-4cf8-812e-e12c707022e7-catalog-content\") pod \"certified-operators-vhjm4\" (UID: \"1fe94c5d-805d-4cf8-812e-e12c707022e7\") " pod="openshift-marketplace/certified-operators-vhjm4" Jan 21 13:52:08 crc kubenswrapper[4765]: I0121 13:52:08.058600 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fe94c5d-805d-4cf8-812e-e12c707022e7-utilities\") pod \"certified-operators-vhjm4\" (UID: \"1fe94c5d-805d-4cf8-812e-e12c707022e7\") " pod="openshift-marketplace/certified-operators-vhjm4" Jan 21 13:52:08 crc kubenswrapper[4765]: I0121 13:52:08.058832 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fe94c5d-805d-4cf8-812e-e12c707022e7-catalog-content\") pod \"certified-operators-vhjm4\" (UID: \"1fe94c5d-805d-4cf8-812e-e12c707022e7\") " pod="openshift-marketplace/certified-operators-vhjm4" Jan 21 13:52:08 crc 
kubenswrapper[4765]: I0121 13:52:08.080426 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnw5z\" (UniqueName: \"kubernetes.io/projected/1fe94c5d-805d-4cf8-812e-e12c707022e7-kube-api-access-mnw5z\") pod \"certified-operators-vhjm4\" (UID: \"1fe94c5d-805d-4cf8-812e-e12c707022e7\") " pod="openshift-marketplace/certified-operators-vhjm4" Jan 21 13:52:08 crc kubenswrapper[4765]: I0121 13:52:08.153372 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vhjm4" Jan 21 13:52:08 crc kubenswrapper[4765]: I0121 13:52:08.725838 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vhjm4"] Jan 21 13:52:09 crc kubenswrapper[4765]: I0121 13:52:09.184498 4765 generic.go:334] "Generic (PLEG): container finished" podID="1fe94c5d-805d-4cf8-812e-e12c707022e7" containerID="0179bd7b3fab4b094bed43e57602d6cc325fb6a7355fef692c24ce37ca781291" exitCode=0 Jan 21 13:52:09 crc kubenswrapper[4765]: I0121 13:52:09.184713 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhjm4" event={"ID":"1fe94c5d-805d-4cf8-812e-e12c707022e7","Type":"ContainerDied","Data":"0179bd7b3fab4b094bed43e57602d6cc325fb6a7355fef692c24ce37ca781291"} Jan 21 13:52:09 crc kubenswrapper[4765]: I0121 13:52:09.184740 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhjm4" event={"ID":"1fe94c5d-805d-4cf8-812e-e12c707022e7","Type":"ContainerStarted","Data":"3664fea96baaaeebf7ac0625e78bcec035301fe87aeed2ca576b3861a8b8e9d5"} Jan 21 13:52:10 crc kubenswrapper[4765]: I0121 13:52:10.197075 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhjm4" event={"ID":"1fe94c5d-805d-4cf8-812e-e12c707022e7","Type":"ContainerStarted","Data":"b8075b5404f46a940f035db15c5a0d0976b6891fc14414b78a987912da3955f8"} Jan 21 13:52:11 crc kubenswrapper[4765]: I0121 13:52:11.208320 4765 generic.go:334] "Generic (PLEG): container finished" podID="1fe94c5d-805d-4cf8-812e-e12c707022e7" containerID="b8075b5404f46a940f035db15c5a0d0976b6891fc14414b78a987912da3955f8" exitCode=0 Jan 21 13:52:11 crc kubenswrapper[4765]: I0121 13:52:11.208393 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhjm4" event={"ID":"1fe94c5d-805d-4cf8-812e-e12c707022e7","Type":"ContainerDied","Data":"b8075b5404f46a940f035db15c5a0d0976b6891fc14414b78a987912da3955f8"} Jan 21 13:52:12 crc kubenswrapper[4765]: I0121 13:52:12.220048 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhjm4" event={"ID":"1fe94c5d-805d-4cf8-812e-e12c707022e7","Type":"ContainerStarted","Data":"f6b0f0b6c8b5ffcd0d65e9b62a75b3e5b7d7363d096b5eb2fa66f23a2e482425"} Jan 21 13:52:12 crc kubenswrapper[4765]: I0121 13:52:12.243789 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vhjm4" podStartSLOduration=2.773640142 podStartE2EDuration="5.243769824s" podCreationTimestamp="2026-01-21 13:52:07 +0000 UTC" firstStartedPulling="2026-01-21 13:52:09.186348599 +0000 UTC m=+2990.204074421" lastFinishedPulling="2026-01-21 13:52:11.656478281 +0000 UTC m=+2992.674204103" observedRunningTime="2026-01-21 13:52:12.241646313 +0000 UTC m=+2993.259372155" watchObservedRunningTime="2026-01-21 13:52:12.243769824 +0000 UTC m=+2993.261495646" Jan 21 13:52:14 crc kubenswrapper[4765]: I0121 
13:52:14.446612 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:52:14 crc kubenswrapper[4765]: I0121 13:52:14.447035 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:52:18 crc kubenswrapper[4765]: I0121 13:52:18.153994 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vhjm4" Jan 21 13:52:18 crc kubenswrapper[4765]: I0121 13:52:18.154682 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vhjm4" Jan 21 13:52:18 crc kubenswrapper[4765]: I0121 13:52:18.212045 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vhjm4" Jan 21 13:52:18 crc kubenswrapper[4765]: I0121 13:52:18.328161 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vhjm4" Jan 21 13:52:18 crc kubenswrapper[4765]: I0121 13:52:18.474280 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vhjm4"] Jan 21 13:52:20 crc kubenswrapper[4765]: I0121 13:52:20.499187 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vhjm4" podUID="1fe94c5d-805d-4cf8-812e-e12c707022e7" containerName="registry-server" containerID="cri-o://f6b0f0b6c8b5ffcd0d65e9b62a75b3e5b7d7363d096b5eb2fa66f23a2e482425" gracePeriod=2 Jan 21 13:52:20 crc kubenswrapper[4765]: I0121 13:52:20.902448 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vhjm4" Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.017566 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fe94c5d-805d-4cf8-812e-e12c707022e7-catalog-content\") pod \"1fe94c5d-805d-4cf8-812e-e12c707022e7\" (UID: \"1fe94c5d-805d-4cf8-812e-e12c707022e7\") " Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.017608 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fe94c5d-805d-4cf8-812e-e12c707022e7-utilities\") pod \"1fe94c5d-805d-4cf8-812e-e12c707022e7\" (UID: \"1fe94c5d-805d-4cf8-812e-e12c707022e7\") " Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.017632 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnw5z\" (UniqueName: \"kubernetes.io/projected/1fe94c5d-805d-4cf8-812e-e12c707022e7-kube-api-access-mnw5z\") pod \"1fe94c5d-805d-4cf8-812e-e12c707022e7\" (UID: \"1fe94c5d-805d-4cf8-812e-e12c707022e7\") " Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.018275 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fe94c5d-805d-4cf8-812e-e12c707022e7-utilities" (OuterVolumeSpecName: "utilities") pod "1fe94c5d-805d-4cf8-812e-e12c707022e7" (UID: "1fe94c5d-805d-4cf8-812e-e12c707022e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.022771 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fe94c5d-805d-4cf8-812e-e12c707022e7-kube-api-access-mnw5z" (OuterVolumeSpecName: "kube-api-access-mnw5z") pod "1fe94c5d-805d-4cf8-812e-e12c707022e7" (UID: "1fe94c5d-805d-4cf8-812e-e12c707022e7"). InnerVolumeSpecName "kube-api-access-mnw5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.073476 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fe94c5d-805d-4cf8-812e-e12c707022e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fe94c5d-805d-4cf8-812e-e12c707022e7" (UID: "1fe94c5d-805d-4cf8-812e-e12c707022e7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.119929 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fe94c5d-805d-4cf8-812e-e12c707022e7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.119965 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fe94c5d-805d-4cf8-812e-e12c707022e7-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.119976 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnw5z\" (UniqueName: \"kubernetes.io/projected/1fe94c5d-805d-4cf8-812e-e12c707022e7-kube-api-access-mnw5z\") on node \"crc\" DevicePath \"\"" Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.511498 4765 generic.go:334] "Generic (PLEG): container finished" podID="1fe94c5d-805d-4cf8-812e-e12c707022e7" containerID="f6b0f0b6c8b5ffcd0d65e9b62a75b3e5b7d7363d096b5eb2fa66f23a2e482425" exitCode=0 Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.511549 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhjm4" event={"ID":"1fe94c5d-805d-4cf8-812e-e12c707022e7","Type":"ContainerDied","Data":"f6b0f0b6c8b5ffcd0d65e9b62a75b3e5b7d7363d096b5eb2fa66f23a2e482425"} Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.511579 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhjm4" event={"ID":"1fe94c5d-805d-4cf8-812e-e12c707022e7","Type":"ContainerDied","Data":"3664fea96baaaeebf7ac0625e78bcec035301fe87aeed2ca576b3861a8b8e9d5"} Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.511599 4765 scope.go:117] "RemoveContainer" containerID="f6b0f0b6c8b5ffcd0d65e9b62a75b3e5b7d7363d096b5eb2fa66f23a2e482425" Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.511752 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vhjm4" Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.543118 4765 scope.go:117] "RemoveContainer" containerID="b8075b5404f46a940f035db15c5a0d0976b6891fc14414b78a987912da3955f8" Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.551646 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vhjm4"] Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.558812 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vhjm4"] Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.570060 4765 scope.go:117] "RemoveContainer" containerID="0179bd7b3fab4b094bed43e57602d6cc325fb6a7355fef692c24ce37ca781291" Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.599342 4765 scope.go:117] "RemoveContainer" containerID="f6b0f0b6c8b5ffcd0d65e9b62a75b3e5b7d7363d096b5eb2fa66f23a2e482425" Jan 21 13:52:21 crc kubenswrapper[4765]: E0121 13:52:21.599919 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6b0f0b6c8b5ffcd0d65e9b62a75b3e5b7d7363d096b5eb2fa66f23a2e482425\": container with ID starting with f6b0f0b6c8b5ffcd0d65e9b62a75b3e5b7d7363d096b5eb2fa66f23a2e482425 not found: ID does not exist" containerID="f6b0f0b6c8b5ffcd0d65e9b62a75b3e5b7d7363d096b5eb2fa66f23a2e482425" Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.599975 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b0f0b6c8b5ffcd0d65e9b62a75b3e5b7d7363d096b5eb2fa66f23a2e482425"} err="failed to get container status \"f6b0f0b6c8b5ffcd0d65e9b62a75b3e5b7d7363d096b5eb2fa66f23a2e482425\": rpc error: code = NotFound desc = could not find container \"f6b0f0b6c8b5ffcd0d65e9b62a75b3e5b7d7363d096b5eb2fa66f23a2e482425\": container with ID starting with f6b0f0b6c8b5ffcd0d65e9b62a75b3e5b7d7363d096b5eb2fa66f23a2e482425 not found: ID does not exist" Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.600002 4765 scope.go:117] "RemoveContainer" containerID="b8075b5404f46a940f035db15c5a0d0976b6891fc14414b78a987912da3955f8" Jan 21 13:52:21 crc kubenswrapper[4765]: E0121 13:52:21.600460 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8075b5404f46a940f035db15c5a0d0976b6891fc14414b78a987912da3955f8\": container with ID starting with b8075b5404f46a940f035db15c5a0d0976b6891fc14414b78a987912da3955f8 not found: ID does not exist" containerID="b8075b5404f46a940f035db15c5a0d0976b6891fc14414b78a987912da3955f8" Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.600489 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8075b5404f46a940f035db15c5a0d0976b6891fc14414b78a987912da3955f8"} err="failed to get container status \"b8075b5404f46a940f035db15c5a0d0976b6891fc14414b78a987912da3955f8\": rpc error: code = NotFound desc = could not find container \"b8075b5404f46a940f035db15c5a0d0976b6891fc14414b78a987912da3955f8\": container with ID starting with b8075b5404f46a940f035db15c5a0d0976b6891fc14414b78a987912da3955f8 not found: ID does not exist" Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.600512 4765 scope.go:117] "RemoveContainer" containerID="0179bd7b3fab4b094bed43e57602d6cc325fb6a7355fef692c24ce37ca781291" Jan 21 13:52:21 crc kubenswrapper[4765]: E0121 13:52:21.600981 4765 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0179bd7b3fab4b094bed43e57602d6cc325fb6a7355fef692c24ce37ca781291\": container with ID starting with 0179bd7b3fab4b094bed43e57602d6cc325fb6a7355fef692c24ce37ca781291 not found: ID does not exist" containerID="0179bd7b3fab4b094bed43e57602d6cc325fb6a7355fef692c24ce37ca781291" Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.601009 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0179bd7b3fab4b094bed43e57602d6cc325fb6a7355fef692c24ce37ca781291"} err="failed to get container status \"0179bd7b3fab4b094bed43e57602d6cc325fb6a7355fef692c24ce37ca781291\": rpc error: code = NotFound desc = could not find container \"0179bd7b3fab4b094bed43e57602d6cc325fb6a7355fef692c24ce37ca781291\": container with ID starting with 0179bd7b3fab4b094bed43e57602d6cc325fb6a7355fef692c24ce37ca781291 not found: ID does not exist" Jan 21 13:52:21 crc kubenswrapper[4765]: I0121 13:52:21.632230 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fe94c5d-805d-4cf8-812e-e12c707022e7" path="/var/lib/kubelet/pods/1fe94c5d-805d-4cf8-812e-e12c707022e7/volumes" Jan 21 13:52:44 crc kubenswrapper[4765]: I0121 13:52:44.446483 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:52:44 crc kubenswrapper[4765]: I0121 13:52:44.447036 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:52:44 crc kubenswrapper[4765]: I0121 13:52:44.447083 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 13:52:44 crc kubenswrapper[4765]: I0121 13:52:44.447959 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217"} pod="openshift-machine-config-operator/machine-config-daemon-v72nq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:52:44 crc kubenswrapper[4765]: I0121 13:52:44.448032 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" containerID="cri-o://1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" gracePeriod=600 Jan 21 13:52:44 crc kubenswrapper[4765]: E0121 13:52:44.579598 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:52:44 crc kubenswrapper[4765]: I0121 13:52:44.711674 4765 generic.go:334] 
"Generic (PLEG): container finished" podID="e149390c-e4da-4dfd-bed2-b14de058f921" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" exitCode=0 Jan 21 13:52:44 crc kubenswrapper[4765]: I0121 13:52:44.711728 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerDied","Data":"1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217"} Jan 21 13:52:44 crc kubenswrapper[4765]: I0121 13:52:44.711775 4765 scope.go:117] "RemoveContainer" containerID="289eeaf139eef3057016b49afbca88f96cc90417dca0a155ef85620d4bfd08bb" Jan 21 13:52:44 crc kubenswrapper[4765]: I0121 13:52:44.712473 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:52:44 crc kubenswrapper[4765]: E0121 13:52:44.712716 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:52:58 crc kubenswrapper[4765]: I0121 13:52:58.613942 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:52:58 crc kubenswrapper[4765]: E0121 13:52:58.614695 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:52:59 crc kubenswrapper[4765]: I0121 13:52:59.842675 4765 generic.go:334] "Generic (PLEG): container finished" podID="72b52054-c641-4cfb-9e83-f5b6794f77de" containerID="f4a5177d560ce63dd1f1bb6df1ae5c1efed0183c78bd856ad86e674c7cd99d09" exitCode=0 Jan 21 13:52:59 crc kubenswrapper[4765]: I0121 13:52:59.842756 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" event={"ID":"72b52054-c641-4cfb-9e83-f5b6794f77de","Type":"ContainerDied","Data":"f4a5177d560ce63dd1f1bb6df1ae5c1efed0183c78bd856ad86e674c7cd99d09"} Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.287865 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.368054 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ceilometer-compute-config-data-0\") pod \"72b52054-c641-4cfb-9e83-f5b6794f77de\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.368139 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ceilometer-compute-config-data-2\") pod \"72b52054-c641-4cfb-9e83-f5b6794f77de\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.368195 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ceilometer-compute-config-data-1\") pod \"72b52054-c641-4cfb-9e83-f5b6794f77de\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.368270 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl582\" (UniqueName: \"kubernetes.io/projected/72b52054-c641-4cfb-9e83-f5b6794f77de-kube-api-access-tl582\") pod \"72b52054-c641-4cfb-9e83-f5b6794f77de\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.368330 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-inventory\") pod \"72b52054-c641-4cfb-9e83-f5b6794f77de\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.368381 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-telemetry-combined-ca-bundle\") pod \"72b52054-c641-4cfb-9e83-f5b6794f77de\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.368569 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ssh-key-openstack-edpm-ipam\") pod \"72b52054-c641-4cfb-9e83-f5b6794f77de\" (UID: \"72b52054-c641-4cfb-9e83-f5b6794f77de\") " Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.378516 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "72b52054-c641-4cfb-9e83-f5b6794f77de" (UID: "72b52054-c641-4cfb-9e83-f5b6794f77de"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.402156 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72b52054-c641-4cfb-9e83-f5b6794f77de-kube-api-access-tl582" (OuterVolumeSpecName: "kube-api-access-tl582") pod "72b52054-c641-4cfb-9e83-f5b6794f77de" (UID: "72b52054-c641-4cfb-9e83-f5b6794f77de"). InnerVolumeSpecName "kube-api-access-tl582". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.542057 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl582\" (UniqueName: \"kubernetes.io/projected/72b52054-c641-4cfb-9e83-f5b6794f77de-kube-api-access-tl582\") on node \"crc\" DevicePath \"\"" Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.542090 4765 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.550714 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-inventory" (OuterVolumeSpecName: "inventory") pod "72b52054-c641-4cfb-9e83-f5b6794f77de" (UID: "72b52054-c641-4cfb-9e83-f5b6794f77de"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.577258 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "72b52054-c641-4cfb-9e83-f5b6794f77de" (UID: "72b52054-c641-4cfb-9e83-f5b6794f77de"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.584635 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "72b52054-c641-4cfb-9e83-f5b6794f77de" (UID: "72b52054-c641-4cfb-9e83-f5b6794f77de"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.602916 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "72b52054-c641-4cfb-9e83-f5b6794f77de" (UID: "72b52054-c641-4cfb-9e83-f5b6794f77de"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.615432 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "72b52054-c641-4cfb-9e83-f5b6794f77de" (UID: "72b52054-c641-4cfb-9e83-f5b6794f77de"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.643117 4765 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.643745 4765 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.643820 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.643902 4765 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.643960 4765 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/72b52054-c641-4cfb-9e83-f5b6794f77de-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.859152 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" event={"ID":"72b52054-c641-4cfb-9e83-f5b6794f77de","Type":"ContainerDied","Data":"6217153074e42459649180a7bde1f22ec95794b07dbf775089a641f8a3eedd3b"} Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.859511 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6217153074e42459649180a7bde1f22ec95794b07dbf775089a641f8a3eedd3b" Jan 21 13:53:01 crc kubenswrapper[4765]: I0121 13:53:01.859574 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb" Jan 21 13:53:13 crc kubenswrapper[4765]: I0121 13:53:13.614100 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:53:13 crc kubenswrapper[4765]: E0121 13:53:13.614933 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:53:27 crc kubenswrapper[4765]: I0121 13:53:27.614806 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:53:27 crc kubenswrapper[4765]: E0121 13:53:27.618148 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:53:38 crc kubenswrapper[4765]: I0121 13:53:38.614601 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:53:38 crc kubenswrapper[4765]: E0121 13:53:38.615466 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:53:53 crc kubenswrapper[4765]: I0121 13:53:53.614178 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:53:53 crc kubenswrapper[4765]: E0121 13:53:53.615671 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:53:56 crc kubenswrapper[4765]: I0121 13:53:56.865519 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 13:53:56 crc kubenswrapper[4765]: E0121 13:53:56.866899 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe94c5d-805d-4cf8-812e-e12c707022e7" containerName="extract-content" Jan 21 13:53:56 crc kubenswrapper[4765]: I0121 13:53:56.866921 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe94c5d-805d-4cf8-812e-e12c707022e7" containerName="extract-content" Jan 21 13:53:56 crc kubenswrapper[4765]: E0121 13:53:56.867078 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72b52054-c641-4cfb-9e83-f5b6794f77de" 
containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 13:53:56 crc kubenswrapper[4765]: I0121 13:53:56.867092 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="72b52054-c641-4cfb-9e83-f5b6794f77de" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 13:53:56 crc kubenswrapper[4765]: E0121 13:53:56.867133 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe94c5d-805d-4cf8-812e-e12c707022e7" containerName="registry-server" Jan 21 13:53:56 crc kubenswrapper[4765]: I0121 13:53:56.867142 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe94c5d-805d-4cf8-812e-e12c707022e7" containerName="registry-server" Jan 21 13:53:56 crc kubenswrapper[4765]: E0121 13:53:56.867163 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe94c5d-805d-4cf8-812e-e12c707022e7" containerName="extract-utilities" Jan 21 13:53:56 crc kubenswrapper[4765]: I0121 13:53:56.867173 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe94c5d-805d-4cf8-812e-e12c707022e7" containerName="extract-utilities" Jan 21 13:53:56 crc kubenswrapper[4765]: I0121 13:53:56.867438 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="72b52054-c641-4cfb-9e83-f5b6794f77de" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 13:53:56 crc kubenswrapper[4765]: I0121 13:53:56.867474 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe94c5d-805d-4cf8-812e-e12c707022e7" containerName="registry-server" Jan 21 13:53:56 crc kubenswrapper[4765]: I0121 13:53:56.868262 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 13:53:56 crc kubenswrapper[4765]: I0121 13:53:56.873156 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 21 13:53:56 crc kubenswrapper[4765]: I0121 13:53:56.873445 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 21 13:53:56 crc kubenswrapper[4765]: I0121 13:53:56.873625 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 21 13:53:56 crc kubenswrapper[4765]: I0121 13:53:56.874307 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qj64x" Jan 21 13:53:56 crc kubenswrapper[4765]: I0121 13:53:56.875244 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 13:53:56 crc kubenswrapper[4765]: I0121 13:53:56.907652 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a8700b-dcb3-42d5-9655-61f2c977e9e2-config-data\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:56 crc kubenswrapper[4765]: I0121 13:53:56.907742 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/65a8700b-dcb3-42d5-9655-61f2c977e9e2-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:56 crc kubenswrapper[4765]: I0121 13:53:56.908241 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/65a8700b-dcb3-42d5-9655-61f2c977e9e2-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.011472 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxcrf\" (UniqueName: \"kubernetes.io/projected/65a8700b-dcb3-42d5-9655-61f2c977e9e2-kube-api-access-sxcrf\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.011522 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/65a8700b-dcb3-42d5-9655-61f2c977e9e2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.011548 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a8700b-dcb3-42d5-9655-61f2c977e9e2-config-data\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.011648 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/65a8700b-dcb3-42d5-9655-61f2c977e9e2-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.011683 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/65a8700b-dcb3-42d5-9655-61f2c977e9e2-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.011768 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.011827 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/65a8700b-dcb3-42d5-9655-61f2c977e9e2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.011882 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/65a8700b-dcb3-42d5-9655-61f2c977e9e2-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.011922 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/65a8700b-dcb3-42d5-9655-61f2c977e9e2-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.012856 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a8700b-dcb3-42d5-9655-61f2c977e9e2-config-data\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.013764 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/65a8700b-dcb3-42d5-9655-61f2c977e9e2-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.019812 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/65a8700b-dcb3-42d5-9655-61f2c977e9e2-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.113728 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/65a8700b-dcb3-42d5-9655-61f2c977e9e2-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.113840 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxcrf\" (UniqueName: \"kubernetes.io/projected/65a8700b-dcb3-42d5-9655-61f2c977e9e2-kube-api-access-sxcrf\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.113880 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/65a8700b-dcb3-42d5-9655-61f2c977e9e2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.113941 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/65a8700b-dcb3-42d5-9655-61f2c977e9e2-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.114022 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.114053 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/65a8700b-dcb3-42d5-9655-61f2c977e9e2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " 
pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.114641 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/65a8700b-dcb3-42d5-9655-61f2c977e9e2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.115504 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/65a8700b-dcb3-42d5-9655-61f2c977e9e2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.115806 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.116541 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/65a8700b-dcb3-42d5-9655-61f2c977e9e2-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.117694 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/65a8700b-dcb3-42d5-9655-61f2c977e9e2-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.136060 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxcrf\" (UniqueName: \"kubernetes.io/projected/65a8700b-dcb3-42d5-9655-61f2c977e9e2-kube-api-access-sxcrf\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.140430 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.196187 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.756740 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 13:53:57 crc kubenswrapper[4765]: I0121 13:53:57.762962 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:53:58 crc kubenswrapper[4765]: I0121 13:53:58.741765 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"65a8700b-dcb3-42d5-9655-61f2c977e9e2","Type":"ContainerStarted","Data":"1c8604ba9b8192b80f2fc07c4a36bec339fa10802e3e0692fac189a4bd0029db"} Jan 21 13:54:06 crc kubenswrapper[4765]: I0121 13:54:06.649572 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:54:06 crc kubenswrapper[4765]: E0121 13:54:06.650491 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:54:17 crc kubenswrapper[4765]: I0121 13:54:17.615158 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:54:17 crc kubenswrapper[4765]: E0121 13:54:17.616114 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:54:32 crc kubenswrapper[4765]: I0121 13:54:32.613449 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:54:32 crc kubenswrapper[4765]: E0121 13:54:32.614236 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:54:33 crc kubenswrapper[4765]: E0121 13:54:33.747424 4765 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 21 13:54:33 crc kubenswrapper[4765]: E0121 13:54:33.747597 4765 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sxcrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(65a8700b-dcb3-42d5-9655-61f2c977e9e2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 13:54:33 crc kubenswrapper[4765]: E0121 13:54:33.749464 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="65a8700b-dcb3-42d5-9655-61f2c977e9e2" Jan 21 13:54:34 crc kubenswrapper[4765]: E0121 13:54:34.083699 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="65a8700b-dcb3-42d5-9655-61f2c977e9e2" Jan 21 13:54:47 crc kubenswrapper[4765]: I0121 13:54:47.148712 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 21 13:54:47 crc kubenswrapper[4765]: I0121 13:54:47.613721 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:54:47 crc kubenswrapper[4765]: E0121 13:54:47.614328 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:54:49 crc kubenswrapper[4765]: I0121 13:54:49.241239 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"65a8700b-dcb3-42d5-9655-61f2c977e9e2","Type":"ContainerStarted","Data":"e98b372174a04458dcbd814a8d6f7a7aea911c452dda84e8605217ee7153ff2a"} Jan 21 13:54:49 crc kubenswrapper[4765]: I0121 13:54:49.265261 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.882011163 podStartE2EDuration="54.265244202s" podCreationTimestamp="2026-01-21 13:53:55 +0000 UTC" firstStartedPulling="2026-01-21 13:53:57.762739944 +0000 UTC m=+3098.780465766" lastFinishedPulling="2026-01-21 13:54:47.145972983 +0000 UTC m=+3148.163698805" observedRunningTime="2026-01-21 13:54:49.257804767 +0000 UTC m=+3150.275530629" watchObservedRunningTime="2026-01-21 13:54:49.265244202 +0000 UTC m=+3150.282970024" Jan 21 13:55:00 crc kubenswrapper[4765]: I0121 13:55:00.614245 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:55:00 crc kubenswrapper[4765]: E0121 13:55:00.615172 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:55:12 crc kubenswrapper[4765]: I0121 13:55:12.613134 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:55:12 crc kubenswrapper[4765]: E0121 13:55:12.614047 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" 
podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:55:23 crc kubenswrapper[4765]: I0121 13:55:23.626273 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:55:23 crc kubenswrapper[4765]: E0121 13:55:23.628386 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:55:34 crc kubenswrapper[4765]: I0121 13:55:34.614920 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:55:34 crc kubenswrapper[4765]: E0121 13:55:34.615725 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:55:49 crc kubenswrapper[4765]: I0121 13:55:49.613571 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:55:49 crc kubenswrapper[4765]: E0121 13:55:49.615195 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:56:00 crc kubenswrapper[4765]: I0121 13:56:00.613628 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:56:00 crc kubenswrapper[4765]: E0121 13:56:00.614413 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:56:14 crc kubenswrapper[4765]: I0121 13:56:14.614699 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:56:14 crc kubenswrapper[4765]: E0121 13:56:14.615438 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:56:27 crc kubenswrapper[4765]: I0121 13:56:27.613457 4765 scope.go:117] "RemoveContainer" 
containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:56:27 crc kubenswrapper[4765]: E0121 13:56:27.614250 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:56:41 crc kubenswrapper[4765]: I0121 13:56:41.614675 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:56:41 crc kubenswrapper[4765]: E0121 13:56:41.615440 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:56:56 crc kubenswrapper[4765]: I0121 13:56:56.614164 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:56:56 crc kubenswrapper[4765]: E0121 13:56:56.614903 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:57:09 crc kubenswrapper[4765]: I0121 13:57:09.629307 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:57:09 crc kubenswrapper[4765]: E0121 13:57:09.630109 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:57:20 crc kubenswrapper[4765]: I0121 13:57:20.613636 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:57:20 crc kubenswrapper[4765]: E0121 13:57:20.614461 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:57:35 crc kubenswrapper[4765]: I0121 13:57:35.614962 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:57:35 crc kubenswrapper[4765]: E0121 13:57:35.615775 4765 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 13:57:49 crc kubenswrapper[4765]: I0121 13:57:49.623109 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 13:57:50 crc kubenswrapper[4765]: I0121 13:57:50.299834 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"123c2df4d0b298a94771f8fe32d86827f1ad185563334945bac4e807eabfc67b"} Jan 21 14:00:00 crc kubenswrapper[4765]: I0121 14:00:00.172149 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483400-vl6bm"] Jan 21 14:00:00 crc kubenswrapper[4765]: I0121 14:00:00.174026 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-vl6bm" Jan 21 14:00:00 crc kubenswrapper[4765]: I0121 14:00:00.176053 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 14:00:00 crc kubenswrapper[4765]: I0121 14:00:00.177654 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 14:00:00 crc kubenswrapper[4765]: I0121 14:00:00.226231 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483400-vl6bm"] Jan 21 14:00:00 crc kubenswrapper[4765]: I0121 14:00:00.229294 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khkpf\" (UniqueName: \"kubernetes.io/projected/cf769d7c-2a2d-4217-9e88-50cdd5d52ced-kube-api-access-khkpf\") pod \"collect-profiles-29483400-vl6bm\" (UID: \"cf769d7c-2a2d-4217-9e88-50cdd5d52ced\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-vl6bm" Jan 21 14:00:00 crc kubenswrapper[4765]: I0121 14:00:00.229337 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf769d7c-2a2d-4217-9e88-50cdd5d52ced-config-volume\") pod \"collect-profiles-29483400-vl6bm\" (UID: \"cf769d7c-2a2d-4217-9e88-50cdd5d52ced\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-vl6bm" Jan 21 14:00:00 crc kubenswrapper[4765]: I0121 14:00:00.229557 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf769d7c-2a2d-4217-9e88-50cdd5d52ced-secret-volume\") pod \"collect-profiles-29483400-vl6bm\" (UID: \"cf769d7c-2a2d-4217-9e88-50cdd5d52ced\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-vl6bm" Jan 21 14:00:00 crc kubenswrapper[4765]: I0121 14:00:00.330916 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khkpf\" (UniqueName: \"kubernetes.io/projected/cf769d7c-2a2d-4217-9e88-50cdd5d52ced-kube-api-access-khkpf\") pod \"collect-profiles-29483400-vl6bm\" (UID: 
\"cf769d7c-2a2d-4217-9e88-50cdd5d52ced\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-vl6bm" Jan 21 14:00:00 crc kubenswrapper[4765]: I0121 14:00:00.331172 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf769d7c-2a2d-4217-9e88-50cdd5d52ced-config-volume\") pod \"collect-profiles-29483400-vl6bm\" (UID: \"cf769d7c-2a2d-4217-9e88-50cdd5d52ced\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-vl6bm" Jan 21 14:00:00 crc kubenswrapper[4765]: I0121 14:00:00.331247 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf769d7c-2a2d-4217-9e88-50cdd5d52ced-secret-volume\") pod \"collect-profiles-29483400-vl6bm\" (UID: \"cf769d7c-2a2d-4217-9e88-50cdd5d52ced\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-vl6bm" Jan 21 14:00:00 crc kubenswrapper[4765]: I0121 14:00:00.332020 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf769d7c-2a2d-4217-9e88-50cdd5d52ced-config-volume\") pod \"collect-profiles-29483400-vl6bm\" (UID: \"cf769d7c-2a2d-4217-9e88-50cdd5d52ced\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-vl6bm" Jan 21 14:00:00 crc kubenswrapper[4765]: I0121 14:00:00.337394 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf769d7c-2a2d-4217-9e88-50cdd5d52ced-secret-volume\") pod \"collect-profiles-29483400-vl6bm\" (UID: \"cf769d7c-2a2d-4217-9e88-50cdd5d52ced\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-vl6bm" Jan 21 14:00:00 crc kubenswrapper[4765]: I0121 14:00:00.349168 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khkpf\" (UniqueName: \"kubernetes.io/projected/cf769d7c-2a2d-4217-9e88-50cdd5d52ced-kube-api-access-khkpf\") pod \"collect-profiles-29483400-vl6bm\" (UID: \"cf769d7c-2a2d-4217-9e88-50cdd5d52ced\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-vl6bm" Jan 21 14:00:00 crc kubenswrapper[4765]: I0121 14:00:00.494633 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-vl6bm" Jan 21 14:00:01 crc kubenswrapper[4765]: I0121 14:00:01.272479 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483400-vl6bm"] Jan 21 14:00:01 crc kubenswrapper[4765]: I0121 14:00:01.780447 4765 generic.go:334] "Generic (PLEG): container finished" podID="cf769d7c-2a2d-4217-9e88-50cdd5d52ced" containerID="ae8e28136be748b3a90fad4803a9471f738b9d4574b8906c59812d1bcd0c7d66" exitCode=0 Jan 21 14:00:01 crc kubenswrapper[4765]: I0121 14:00:01.780527 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-vl6bm" event={"ID":"cf769d7c-2a2d-4217-9e88-50cdd5d52ced","Type":"ContainerDied","Data":"ae8e28136be748b3a90fad4803a9471f738b9d4574b8906c59812d1bcd0c7d66"} Jan 21 14:00:01 crc kubenswrapper[4765]: I0121 14:00:01.780823 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-vl6bm" event={"ID":"cf769d7c-2a2d-4217-9e88-50cdd5d52ced","Type":"ContainerStarted","Data":"d0d84b0f325896e57b7c8580a024ce87c5ef0325cf1dea917c37d7b350a983cc"} Jan 21 14:00:03 crc kubenswrapper[4765]: I0121 14:00:03.228754 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-vl6bm" Jan 21 14:00:03 crc kubenswrapper[4765]: I0121 14:00:03.388257 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khkpf\" (UniqueName: \"kubernetes.io/projected/cf769d7c-2a2d-4217-9e88-50cdd5d52ced-kube-api-access-khkpf\") pod \"cf769d7c-2a2d-4217-9e88-50cdd5d52ced\" (UID: \"cf769d7c-2a2d-4217-9e88-50cdd5d52ced\") " Jan 21 14:00:03 crc kubenswrapper[4765]: I0121 14:00:03.388309 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf769d7c-2a2d-4217-9e88-50cdd5d52ced-config-volume\") pod \"cf769d7c-2a2d-4217-9e88-50cdd5d52ced\" (UID: \"cf769d7c-2a2d-4217-9e88-50cdd5d52ced\") " Jan 21 14:00:03 crc kubenswrapper[4765]: I0121 14:00:03.388354 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf769d7c-2a2d-4217-9e88-50cdd5d52ced-secret-volume\") pod \"cf769d7c-2a2d-4217-9e88-50cdd5d52ced\" (UID: \"cf769d7c-2a2d-4217-9e88-50cdd5d52ced\") " Jan 21 14:00:03 crc kubenswrapper[4765]: I0121 14:00:03.389602 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf769d7c-2a2d-4217-9e88-50cdd5d52ced-config-volume" (OuterVolumeSpecName: "config-volume") pod "cf769d7c-2a2d-4217-9e88-50cdd5d52ced" (UID: "cf769d7c-2a2d-4217-9e88-50cdd5d52ced"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:00:03 crc kubenswrapper[4765]: I0121 14:00:03.395701 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf769d7c-2a2d-4217-9e88-50cdd5d52ced-kube-api-access-khkpf" (OuterVolumeSpecName: "kube-api-access-khkpf") pod "cf769d7c-2a2d-4217-9e88-50cdd5d52ced" (UID: "cf769d7c-2a2d-4217-9e88-50cdd5d52ced"). InnerVolumeSpecName "kube-api-access-khkpf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:00:03 crc kubenswrapper[4765]: I0121 14:00:03.414803 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf769d7c-2a2d-4217-9e88-50cdd5d52ced-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cf769d7c-2a2d-4217-9e88-50cdd5d52ced" (UID: "cf769d7c-2a2d-4217-9e88-50cdd5d52ced"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:00:03 crc kubenswrapper[4765]: I0121 14:00:03.490994 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khkpf\" (UniqueName: \"kubernetes.io/projected/cf769d7c-2a2d-4217-9e88-50cdd5d52ced-kube-api-access-khkpf\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:03 crc kubenswrapper[4765]: I0121 14:00:03.491028 4765 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf769d7c-2a2d-4217-9e88-50cdd5d52ced-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:03 crc kubenswrapper[4765]: I0121 14:00:03.491038 4765 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf769d7c-2a2d-4217-9e88-50cdd5d52ced-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:03 crc kubenswrapper[4765]: I0121 14:00:03.807235 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-vl6bm" event={"ID":"cf769d7c-2a2d-4217-9e88-50cdd5d52ced","Type":"ContainerDied","Data":"d0d84b0f325896e57b7c8580a024ce87c5ef0325cf1dea917c37d7b350a983cc"} Jan 21 14:00:03 crc kubenswrapper[4765]: I0121 14:00:03.807619 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0d84b0f325896e57b7c8580a024ce87c5ef0325cf1dea917c37d7b350a983cc" Jan 21 14:00:03 crc kubenswrapper[4765]: I0121 14:00:03.807433 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483400-vl6bm" Jan 21 14:00:04 crc kubenswrapper[4765]: I0121 14:00:04.335235 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9"] Jan 21 14:00:04 crc kubenswrapper[4765]: I0121 14:00:04.348659 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483355-nk7x9"] Jan 21 14:00:05 crc kubenswrapper[4765]: I0121 14:00:05.629245 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e54c9740-b071-4064-873a-acf56bc89533" path="/var/lib/kubelet/pods/e54c9740-b071-4064-873a-acf56bc89533/volumes" Jan 21 14:00:14 crc kubenswrapper[4765]: I0121 14:00:14.445557 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:00:14 crc kubenswrapper[4765]: I0121 14:00:14.446169 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:00:36 crc kubenswrapper[4765]: I0121 14:00:36.099535 4765 generic.go:334] "Generic (PLEG): container finished" podID="65a8700b-dcb3-42d5-9655-61f2c977e9e2" containerID="e98b372174a04458dcbd814a8d6f7a7aea911c452dda84e8605217ee7153ff2a" exitCode=0 Jan 21 14:00:36 crc kubenswrapper[4765]: I0121 14:00:36.099625 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"65a8700b-dcb3-42d5-9655-61f2c977e9e2","Type":"ContainerDied","Data":"e98b372174a04458dcbd814a8d6f7a7aea911c452dda84e8605217ee7153ff2a"} Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.491014 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.593807 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/65a8700b-dcb3-42d5-9655-61f2c977e9e2-openstack-config-secret\") pod \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.593868 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/65a8700b-dcb3-42d5-9655-61f2c977e9e2-ssh-key\") pod \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.593905 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/65a8700b-dcb3-42d5-9655-61f2c977e9e2-test-operator-ephemeral-temporary\") pod \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.593937 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.593997 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/65a8700b-dcb3-42d5-9655-61f2c977e9e2-openstack-config\") pod \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.594659 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a8700b-dcb3-42d5-9655-61f2c977e9e2-config-data\") pod \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.594693 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/65a8700b-dcb3-42d5-9655-61f2c977e9e2-test-operator-ephemeral-workdir\") pod \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.594841 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxcrf\" (UniqueName: \"kubernetes.io/projected/65a8700b-dcb3-42d5-9655-61f2c977e9e2-kube-api-access-sxcrf\") pod \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.594938 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/65a8700b-dcb3-42d5-9655-61f2c977e9e2-ca-certs\") pod \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\" (UID: \"65a8700b-dcb3-42d5-9655-61f2c977e9e2\") " Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.595424 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a8700b-dcb3-42d5-9655-61f2c977e9e2-config-data" (OuterVolumeSpecName: "config-data") pod 
"65a8700b-dcb3-42d5-9655-61f2c977e9e2" (UID: "65a8700b-dcb3-42d5-9655-61f2c977e9e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.595834 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a8700b-dcb3-42d5-9655-61f2c977e9e2-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "65a8700b-dcb3-42d5-9655-61f2c977e9e2" (UID: "65a8700b-dcb3-42d5-9655-61f2c977e9e2"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.600404 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a8700b-dcb3-42d5-9655-61f2c977e9e2-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "65a8700b-dcb3-42d5-9655-61f2c977e9e2" (UID: "65a8700b-dcb3-42d5-9655-61f2c977e9e2"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.602623 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "65a8700b-dcb3-42d5-9655-61f2c977e9e2" (UID: "65a8700b-dcb3-42d5-9655-61f2c977e9e2"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.604203 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65a8700b-dcb3-42d5-9655-61f2c977e9e2-kube-api-access-sxcrf" (OuterVolumeSpecName: "kube-api-access-sxcrf") pod "65a8700b-dcb3-42d5-9655-61f2c977e9e2" (UID: "65a8700b-dcb3-42d5-9655-61f2c977e9e2"). InnerVolumeSpecName "kube-api-access-sxcrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.629983 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a8700b-dcb3-42d5-9655-61f2c977e9e2-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "65a8700b-dcb3-42d5-9655-61f2c977e9e2" (UID: "65a8700b-dcb3-42d5-9655-61f2c977e9e2"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.646341 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a8700b-dcb3-42d5-9655-61f2c977e9e2-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "65a8700b-dcb3-42d5-9655-61f2c977e9e2" (UID: "65a8700b-dcb3-42d5-9655-61f2c977e9e2"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.655948 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65a8700b-dcb3-42d5-9655-61f2c977e9e2-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "65a8700b-dcb3-42d5-9655-61f2c977e9e2" (UID: "65a8700b-dcb3-42d5-9655-61f2c977e9e2"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.662077 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65a8700b-dcb3-42d5-9655-61f2c977e9e2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "65a8700b-dcb3-42d5-9655-61f2c977e9e2" (UID: "65a8700b-dcb3-42d5-9655-61f2c977e9e2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.697141 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65a8700b-dcb3-42d5-9655-61f2c977e9e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.697186 4765 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/65a8700b-dcb3-42d5-9655-61f2c977e9e2-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.697203 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxcrf\" (UniqueName: \"kubernetes.io/projected/65a8700b-dcb3-42d5-9655-61f2c977e9e2-kube-api-access-sxcrf\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.697233 4765 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/65a8700b-dcb3-42d5-9655-61f2c977e9e2-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.697245 4765 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/65a8700b-dcb3-42d5-9655-61f2c977e9e2-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.697255 4765 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/65a8700b-dcb3-42d5-9655-61f2c977e9e2-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.697263 4765 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/65a8700b-dcb3-42d5-9655-61f2c977e9e2-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.697791 4765 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.697819 4765 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/65a8700b-dcb3-42d5-9655-61f2c977e9e2-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.724007 4765 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 21 14:00:37 crc kubenswrapper[4765]: I0121 14:00:37.799339 4765 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 21 14:00:38 crc kubenswrapper[4765]: I0121 14:00:38.119686 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/tempest-tests-tempest" event={"ID":"65a8700b-dcb3-42d5-9655-61f2c977e9e2","Type":"ContainerDied","Data":"1c8604ba9b8192b80f2fc07c4a36bec339fa10802e3e0692fac189a4bd0029db"} Jan 21 14:00:38 crc kubenswrapper[4765]: I0121 14:00:38.120088 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c8604ba9b8192b80f2fc07c4a36bec339fa10802e3e0692fac189a4bd0029db" Jan 21 14:00:38 crc kubenswrapper[4765]: I0121 14:00:38.119826 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 14:00:38 crc kubenswrapper[4765]: I0121 14:00:38.543761 4765 scope.go:117] "RemoveContainer" containerID="2b78061ca4f0f9dd357a1044a304c871c5d75b92a1cdb2649d1a6dd6d6addf60" Jan 21 14:00:40 crc kubenswrapper[4765]: I0121 14:00:40.787624 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 21 14:00:40 crc kubenswrapper[4765]: E0121 14:00:40.789285 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf769d7c-2a2d-4217-9e88-50cdd5d52ced" containerName="collect-profiles" Jan 21 14:00:40 crc kubenswrapper[4765]: I0121 14:00:40.789404 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf769d7c-2a2d-4217-9e88-50cdd5d52ced" containerName="collect-profiles" Jan 21 14:00:40 crc kubenswrapper[4765]: E0121 14:00:40.789493 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a8700b-dcb3-42d5-9655-61f2c977e9e2" containerName="tempest-tests-tempest-tests-runner" Jan 21 14:00:40 crc kubenswrapper[4765]: I0121 14:00:40.789552 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a8700b-dcb3-42d5-9655-61f2c977e9e2" containerName="tempest-tests-tempest-tests-runner" Jan 21 14:00:40 crc kubenswrapper[4765]: I0121 14:00:40.789824 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf769d7c-2a2d-4217-9e88-50cdd5d52ced" containerName="collect-profiles" Jan 21 14:00:40 crc kubenswrapper[4765]: I0121 14:00:40.789927 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a8700b-dcb3-42d5-9655-61f2c977e9e2" containerName="tempest-tests-tempest-tests-runner" Jan 21 14:00:40 crc kubenswrapper[4765]: I0121 14:00:40.790637 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 14:00:40 crc kubenswrapper[4765]: I0121 14:00:40.796026 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qj64x" Jan 21 14:00:40 crc kubenswrapper[4765]: I0121 14:00:40.798945 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 21 14:00:40 crc kubenswrapper[4765]: I0121 14:00:40.973246 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9t58\" (UniqueName: \"kubernetes.io/projected/30dbae35-d4af-4e14-831b-3c17f0e66a0c-kube-api-access-k9t58\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"30dbae35-d4af-4e14-831b-3c17f0e66a0c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 14:00:40 crc kubenswrapper[4765]: I0121 14:00:40.973448 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"30dbae35-d4af-4e14-831b-3c17f0e66a0c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 14:00:41 crc kubenswrapper[4765]: I0121 14:00:41.075562 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"30dbae35-d4af-4e14-831b-3c17f0e66a0c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 14:00:41 crc kubenswrapper[4765]: I0121 14:00:41.075659 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9t58\" (UniqueName: \"kubernetes.io/projected/30dbae35-d4af-4e14-831b-3c17f0e66a0c-kube-api-access-k9t58\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"30dbae35-d4af-4e14-831b-3c17f0e66a0c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 14:00:41 crc kubenswrapper[4765]: I0121 14:00:41.076113 4765 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"30dbae35-d4af-4e14-831b-3c17f0e66a0c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 14:00:41 crc kubenswrapper[4765]: I0121 14:00:41.103161 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9t58\" (UniqueName: \"kubernetes.io/projected/30dbae35-d4af-4e14-831b-3c17f0e66a0c-kube-api-access-k9t58\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"30dbae35-d4af-4e14-831b-3c17f0e66a0c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 14:00:41 crc kubenswrapper[4765]: I0121 14:00:41.110976 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"30dbae35-d4af-4e14-831b-3c17f0e66a0c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 14:00:41 crc 
kubenswrapper[4765]: I0121 14:00:41.407784 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 14:00:41 crc kubenswrapper[4765]: I0121 14:00:41.855258 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 21 14:00:41 crc kubenswrapper[4765]: I0121 14:00:41.870575 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 14:00:42 crc kubenswrapper[4765]: I0121 14:00:42.164499 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"30dbae35-d4af-4e14-831b-3c17f0e66a0c","Type":"ContainerStarted","Data":"5f84575168e16b374c6496301b35f944ac57092fb31daf8945a5a456a2197a3b"} Jan 21 14:00:43 crc kubenswrapper[4765]: I0121 14:00:43.174338 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"30dbae35-d4af-4e14-831b-3c17f0e66a0c","Type":"ContainerStarted","Data":"b0ff42c482eb4e31a3edb716610b1af4223ae8534e44eebc8bba5f08c7cc0d15"} Jan 21 14:00:43 crc kubenswrapper[4765]: I0121 14:00:43.196815 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.362628432 podStartE2EDuration="3.196795391s" podCreationTimestamp="2026-01-21 14:00:40 +0000 UTC" firstStartedPulling="2026-01-21 14:00:41.870318309 +0000 UTC m=+3502.888044131" lastFinishedPulling="2026-01-21 14:00:42.704485268 +0000 UTC m=+3503.722211090" observedRunningTime="2026-01-21 14:00:43.189487592 +0000 UTC m=+3504.207213414" watchObservedRunningTime="2026-01-21 14:00:43.196795391 +0000 UTC m=+3504.214521213" Jan 21 14:00:44 crc kubenswrapper[4765]: I0121 14:00:44.445804 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:00:44 crc kubenswrapper[4765]: I0121 14:00:44.445868 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:01:00 crc kubenswrapper[4765]: I0121 14:01:00.146743 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29483401-4rwzk"] Jan 21 14:01:00 crc kubenswrapper[4765]: I0121 14:01:00.148947 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483401-4rwzk" Jan 21 14:01:00 crc kubenswrapper[4765]: I0121 14:01:00.199841 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483401-4rwzk"] Jan 21 14:01:00 crc kubenswrapper[4765]: I0121 14:01:00.306613 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6mpp\" (UniqueName: \"kubernetes.io/projected/dac68597-6a74-41ae-987b-e6968ab9931d-kube-api-access-g6mpp\") pod \"keystone-cron-29483401-4rwzk\" (UID: \"dac68597-6a74-41ae-987b-e6968ab9931d\") " pod="openstack/keystone-cron-29483401-4rwzk" Jan 21 14:01:00 crc kubenswrapper[4765]: I0121 14:01:00.306670 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dac68597-6a74-41ae-987b-e6968ab9931d-combined-ca-bundle\") pod \"keystone-cron-29483401-4rwzk\" (UID: \"dac68597-6a74-41ae-987b-e6968ab9931d\") " pod="openstack/keystone-cron-29483401-4rwzk" Jan 21 14:01:00 crc kubenswrapper[4765]: I0121 14:01:00.306935 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dac68597-6a74-41ae-987b-e6968ab9931d-fernet-keys\") pod \"keystone-cron-29483401-4rwzk\" (UID: \"dac68597-6a74-41ae-987b-e6968ab9931d\") " pod="openstack/keystone-cron-29483401-4rwzk" Jan 21 14:01:00 crc kubenswrapper[4765]: I0121 14:01:00.307241 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dac68597-6a74-41ae-987b-e6968ab9931d-config-data\") pod \"keystone-cron-29483401-4rwzk\" (UID: \"dac68597-6a74-41ae-987b-e6968ab9931d\") " pod="openstack/keystone-cron-29483401-4rwzk" Jan 21 14:01:00 crc kubenswrapper[4765]: I0121 14:01:00.408954 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dac68597-6a74-41ae-987b-e6968ab9931d-combined-ca-bundle\") pod \"keystone-cron-29483401-4rwzk\" (UID: \"dac68597-6a74-41ae-987b-e6968ab9931d\") " pod="openstack/keystone-cron-29483401-4rwzk" Jan 21 14:01:00 crc kubenswrapper[4765]: I0121 14:01:00.409421 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dac68597-6a74-41ae-987b-e6968ab9931d-fernet-keys\") pod \"keystone-cron-29483401-4rwzk\" (UID: \"dac68597-6a74-41ae-987b-e6968ab9931d\") " pod="openstack/keystone-cron-29483401-4rwzk" Jan 21 14:01:00 crc kubenswrapper[4765]: I0121 14:01:00.409555 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dac68597-6a74-41ae-987b-e6968ab9931d-config-data\") pod \"keystone-cron-29483401-4rwzk\" (UID: \"dac68597-6a74-41ae-987b-e6968ab9931d\") " pod="openstack/keystone-cron-29483401-4rwzk" Jan 21 14:01:00 crc kubenswrapper[4765]: I0121 14:01:00.409624 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6mpp\" (UniqueName: \"kubernetes.io/projected/dac68597-6a74-41ae-987b-e6968ab9931d-kube-api-access-g6mpp\") pod \"keystone-cron-29483401-4rwzk\" (UID: \"dac68597-6a74-41ae-987b-e6968ab9931d\") " pod="openstack/keystone-cron-29483401-4rwzk" Jan 21 14:01:00 crc kubenswrapper[4765]: I0121 14:01:00.422670 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dac68597-6a74-41ae-987b-e6968ab9931d-fernet-keys\") pod \"keystone-cron-29483401-4rwzk\" (UID: \"dac68597-6a74-41ae-987b-e6968ab9931d\") " pod="openstack/keystone-cron-29483401-4rwzk" Jan 21 14:01:00 crc kubenswrapper[4765]: I0121 14:01:00.422978 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dac68597-6a74-41ae-987b-e6968ab9931d-config-data\") pod \"keystone-cron-29483401-4rwzk\" (UID: \"dac68597-6a74-41ae-987b-e6968ab9931d\") " pod="openstack/keystone-cron-29483401-4rwzk" Jan 21 14:01:00 crc kubenswrapper[4765]: I0121 14:01:00.424012 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dac68597-6a74-41ae-987b-e6968ab9931d-combined-ca-bundle\") pod \"keystone-cron-29483401-4rwzk\" (UID: \"dac68597-6a74-41ae-987b-e6968ab9931d\") " pod="openstack/keystone-cron-29483401-4rwzk" Jan 21 14:01:00 crc kubenswrapper[4765]: I0121 14:01:00.426237 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6mpp\" (UniqueName: \"kubernetes.io/projected/dac68597-6a74-41ae-987b-e6968ab9931d-kube-api-access-g6mpp\") pod \"keystone-cron-29483401-4rwzk\" (UID: \"dac68597-6a74-41ae-987b-e6968ab9931d\") " pod="openstack/keystone-cron-29483401-4rwzk" Jan 21 14:01:00 crc kubenswrapper[4765]: I0121 14:01:00.475179 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483401-4rwzk" Jan 21 14:01:00 crc kubenswrapper[4765]: I0121 14:01:00.971239 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483401-4rwzk"] Jan 21 14:01:01 crc kubenswrapper[4765]: I0121 14:01:01.368272 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483401-4rwzk" event={"ID":"dac68597-6a74-41ae-987b-e6968ab9931d","Type":"ContainerStarted","Data":"197b7d7f1daf2a11b6c3f8c9b20139f2aa7352ddb5229ee8ced7f75c64052a1a"} Jan 21 14:01:01 crc kubenswrapper[4765]: I0121 14:01:01.368619 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483401-4rwzk" event={"ID":"dac68597-6a74-41ae-987b-e6968ab9931d","Type":"ContainerStarted","Data":"9a9d4c9e4a36bd15fdb702bd22c2c19a4dc82b948ddb5820e9fe2ad0640002cc"} Jan 21 14:01:01 crc kubenswrapper[4765]: I0121 14:01:01.403942 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29483401-4rwzk" podStartSLOduration=1.403915541 podStartE2EDuration="1.403915541s" podCreationTimestamp="2026-01-21 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:01:01.393480661 +0000 UTC m=+3522.411206483" watchObservedRunningTime="2026-01-21 14:01:01.403915541 +0000 UTC m=+3522.421641383" Jan 21 14:01:04 crc kubenswrapper[4765]: I0121 14:01:04.393943 4765 generic.go:334] "Generic (PLEG): container finished" podID="dac68597-6a74-41ae-987b-e6968ab9931d" containerID="197b7d7f1daf2a11b6c3f8c9b20139f2aa7352ddb5229ee8ced7f75c64052a1a" exitCode=0 Jan 21 14:01:04 crc kubenswrapper[4765]: I0121 14:01:04.394132 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483401-4rwzk" event={"ID":"dac68597-6a74-41ae-987b-e6968ab9931d","Type":"ContainerDied","Data":"197b7d7f1daf2a11b6c3f8c9b20139f2aa7352ddb5229ee8ced7f75c64052a1a"} Jan 21 14:01:05 crc kubenswrapper[4765]: 
I0121 14:01:05.712333 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483401-4rwzk" Jan 21 14:01:05 crc kubenswrapper[4765]: I0121 14:01:05.851939 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dac68597-6a74-41ae-987b-e6968ab9931d-combined-ca-bundle\") pod \"dac68597-6a74-41ae-987b-e6968ab9931d\" (UID: \"dac68597-6a74-41ae-987b-e6968ab9931d\") " Jan 21 14:01:05 crc kubenswrapper[4765]: I0121 14:01:05.852179 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6mpp\" (UniqueName: \"kubernetes.io/projected/dac68597-6a74-41ae-987b-e6968ab9931d-kube-api-access-g6mpp\") pod \"dac68597-6a74-41ae-987b-e6968ab9931d\" (UID: \"dac68597-6a74-41ae-987b-e6968ab9931d\") " Jan 21 14:01:05 crc kubenswrapper[4765]: I0121 14:01:05.852205 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dac68597-6a74-41ae-987b-e6968ab9931d-config-data\") pod \"dac68597-6a74-41ae-987b-e6968ab9931d\" (UID: \"dac68597-6a74-41ae-987b-e6968ab9931d\") " Jan 21 14:01:05 crc kubenswrapper[4765]: I0121 14:01:05.852269 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dac68597-6a74-41ae-987b-e6968ab9931d-fernet-keys\") pod \"dac68597-6a74-41ae-987b-e6968ab9931d\" (UID: \"dac68597-6a74-41ae-987b-e6968ab9931d\") " Jan 21 14:01:05 crc kubenswrapper[4765]: I0121 14:01:05.858393 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dac68597-6a74-41ae-987b-e6968ab9931d-kube-api-access-g6mpp" (OuterVolumeSpecName: "kube-api-access-g6mpp") pod "dac68597-6a74-41ae-987b-e6968ab9931d" (UID: "dac68597-6a74-41ae-987b-e6968ab9931d"). InnerVolumeSpecName "kube-api-access-g6mpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:01:05 crc kubenswrapper[4765]: I0121 14:01:05.866675 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dac68597-6a74-41ae-987b-e6968ab9931d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dac68597-6a74-41ae-987b-e6968ab9931d" (UID: "dac68597-6a74-41ae-987b-e6968ab9931d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:01:05 crc kubenswrapper[4765]: I0121 14:01:05.886408 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dac68597-6a74-41ae-987b-e6968ab9931d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dac68597-6a74-41ae-987b-e6968ab9931d" (UID: "dac68597-6a74-41ae-987b-e6968ab9931d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:01:05 crc kubenswrapper[4765]: I0121 14:01:05.912519 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dac68597-6a74-41ae-987b-e6968ab9931d-config-data" (OuterVolumeSpecName: "config-data") pod "dac68597-6a74-41ae-987b-e6968ab9931d" (UID: "dac68597-6a74-41ae-987b-e6968ab9931d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:01:05 crc kubenswrapper[4765]: I0121 14:01:05.955962 4765 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dac68597-6a74-41ae-987b-e6968ab9931d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 14:01:05 crc kubenswrapper[4765]: I0121 14:01:05.956302 4765 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dac68597-6a74-41ae-987b-e6968ab9931d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 14:01:05 crc kubenswrapper[4765]: I0121 14:01:05.956319 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6mpp\" (UniqueName: \"kubernetes.io/projected/dac68597-6a74-41ae-987b-e6968ab9931d-kube-api-access-g6mpp\") on node \"crc\" DevicePath \"\"" Jan 21 14:01:05 crc kubenswrapper[4765]: I0121 14:01:05.956331 4765 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dac68597-6a74-41ae-987b-e6968ab9931d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 14:01:06 crc kubenswrapper[4765]: I0121 14:01:06.415307 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483401-4rwzk" event={"ID":"dac68597-6a74-41ae-987b-e6968ab9931d","Type":"ContainerDied","Data":"9a9d4c9e4a36bd15fdb702bd22c2c19a4dc82b948ddb5820e9fe2ad0640002cc"} Jan 21 14:01:06 crc kubenswrapper[4765]: I0121 14:01:06.415360 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a9d4c9e4a36bd15fdb702bd22c2c19a4dc82b948ddb5820e9fe2ad0640002cc" Jan 21 14:01:06 crc kubenswrapper[4765]: I0121 14:01:06.415415 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483401-4rwzk" Jan 21 14:01:11 crc kubenswrapper[4765]: I0121 14:01:11.264851 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fjdnn/must-gather-k5mtw"] Jan 21 14:01:11 crc kubenswrapper[4765]: E0121 14:01:11.265611 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac68597-6a74-41ae-987b-e6968ab9931d" containerName="keystone-cron" Jan 21 14:01:11 crc kubenswrapper[4765]: I0121 14:01:11.265623 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac68597-6a74-41ae-987b-e6968ab9931d" containerName="keystone-cron" Jan 21 14:01:11 crc kubenswrapper[4765]: I0121 14:01:11.265818 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="dac68597-6a74-41ae-987b-e6968ab9931d" containerName="keystone-cron" Jan 21 14:01:11 crc kubenswrapper[4765]: I0121 14:01:11.266763 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fjdnn/must-gather-k5mtw" Jan 21 14:01:11 crc kubenswrapper[4765]: I0121 14:01:11.281145 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fjdnn/must-gather-k5mtw"] Jan 21 14:01:11 crc kubenswrapper[4765]: I0121 14:01:11.281550 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fjdnn"/"openshift-service-ca.crt" Jan 21 14:01:11 crc kubenswrapper[4765]: I0121 14:01:11.293579 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fjdnn"/"kube-root-ca.crt" Jan 21 14:01:11 crc kubenswrapper[4765]: I0121 14:01:11.465669 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg5sm\" (UniqueName: \"kubernetes.io/projected/f2dc91b5-0f41-4899-90c9-e0dcab80e4d8-kube-api-access-qg5sm\") pod \"must-gather-k5mtw\" (UID: \"f2dc91b5-0f41-4899-90c9-e0dcab80e4d8\") " pod="openshift-must-gather-fjdnn/must-gather-k5mtw" Jan 21 14:01:11 crc kubenswrapper[4765]: I0121 14:01:11.466069 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2dc91b5-0f41-4899-90c9-e0dcab80e4d8-must-gather-output\") pod \"must-gather-k5mtw\" (UID: \"f2dc91b5-0f41-4899-90c9-e0dcab80e4d8\") " pod="openshift-must-gather-fjdnn/must-gather-k5mtw" Jan 21 14:01:11 crc kubenswrapper[4765]: I0121 14:01:11.567851 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg5sm\" (UniqueName: \"kubernetes.io/projected/f2dc91b5-0f41-4899-90c9-e0dcab80e4d8-kube-api-access-qg5sm\") pod \"must-gather-k5mtw\" (UID: \"f2dc91b5-0f41-4899-90c9-e0dcab80e4d8\") " pod="openshift-must-gather-fjdnn/must-gather-k5mtw" Jan 21 14:01:11 crc kubenswrapper[4765]: I0121 14:01:11.568158 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2dc91b5-0f41-4899-90c9-e0dcab80e4d8-must-gather-output\") pod \"must-gather-k5mtw\" (UID: \"f2dc91b5-0f41-4899-90c9-e0dcab80e4d8\") " pod="openshift-must-gather-fjdnn/must-gather-k5mtw" Jan 21 14:01:11 crc kubenswrapper[4765]: I0121 14:01:11.568668 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2dc91b5-0f41-4899-90c9-e0dcab80e4d8-must-gather-output\") pod \"must-gather-k5mtw\" (UID: \"f2dc91b5-0f41-4899-90c9-e0dcab80e4d8\") " pod="openshift-must-gather-fjdnn/must-gather-k5mtw" Jan 21 14:01:11 crc kubenswrapper[4765]: I0121 14:01:11.588793 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg5sm\" (UniqueName: \"kubernetes.io/projected/f2dc91b5-0f41-4899-90c9-e0dcab80e4d8-kube-api-access-qg5sm\") pod \"must-gather-k5mtw\" (UID: \"f2dc91b5-0f41-4899-90c9-e0dcab80e4d8\") " pod="openshift-must-gather-fjdnn/must-gather-k5mtw" Jan 21 14:01:11 crc kubenswrapper[4765]: I0121 14:01:11.599885 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fjdnn/must-gather-k5mtw" Jan 21 14:01:12 crc kubenswrapper[4765]: I0121 14:01:12.067952 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fjdnn/must-gather-k5mtw"] Jan 21 14:01:12 crc kubenswrapper[4765]: I0121 14:01:12.462546 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fjdnn/must-gather-k5mtw" event={"ID":"f2dc91b5-0f41-4899-90c9-e0dcab80e4d8","Type":"ContainerStarted","Data":"62431980901d8dfe250bc222d159774f877e0483e81ffc7ad381471e77ad541c"} Jan 21 14:01:14 crc kubenswrapper[4765]: I0121 14:01:14.445999 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:01:14 crc kubenswrapper[4765]: I0121 14:01:14.446374 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:01:14 crc kubenswrapper[4765]: I0121 14:01:14.446419 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 14:01:14 crc kubenswrapper[4765]: I0121 14:01:14.447175 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"123c2df4d0b298a94771f8fe32d86827f1ad185563334945bac4e807eabfc67b"} pod="openshift-machine-config-operator/machine-config-daemon-v72nq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:01:14 crc kubenswrapper[4765]: I0121 14:01:14.447304 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" containerID="cri-o://123c2df4d0b298a94771f8fe32d86827f1ad185563334945bac4e807eabfc67b" gracePeriod=600 Jan 21 14:01:15 crc kubenswrapper[4765]: I0121 14:01:15.505108 4765 generic.go:334] "Generic (PLEG): container finished" podID="e149390c-e4da-4dfd-bed2-b14de058f921" containerID="123c2df4d0b298a94771f8fe32d86827f1ad185563334945bac4e807eabfc67b" exitCode=0 Jan 21 14:01:15 crc kubenswrapper[4765]: I0121 14:01:15.505188 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerDied","Data":"123c2df4d0b298a94771f8fe32d86827f1ad185563334945bac4e807eabfc67b"} Jan 21 14:01:15 crc kubenswrapper[4765]: I0121 14:01:15.505653 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"} Jan 21 14:01:15 crc kubenswrapper[4765]: I0121 14:01:15.505687 4765 scope.go:117] "RemoveContainer" containerID="1f4a2b300d3931d52b33ffd8b534667773730d7c097634178482664222fac217" Jan 21 14:01:16 crc kubenswrapper[4765]: I0121 14:01:16.255442 4765 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4qhvb"] Jan 21 14:01:16 crc kubenswrapper[4765]: I0121 14:01:16.257809 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qhvb" Jan 21 14:01:16 crc kubenswrapper[4765]: I0121 14:01:16.279705 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qhvb"] Jan 21 14:01:16 crc kubenswrapper[4765]: I0121 14:01:16.360896 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400-utilities\") pod \"redhat-marketplace-4qhvb\" (UID: \"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400\") " pod="openshift-marketplace/redhat-marketplace-4qhvb" Jan 21 14:01:16 crc kubenswrapper[4765]: I0121 14:01:16.361007 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dclrn\" (UniqueName: \"kubernetes.io/projected/bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400-kube-api-access-dclrn\") pod \"redhat-marketplace-4qhvb\" (UID: \"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400\") " pod="openshift-marketplace/redhat-marketplace-4qhvb" Jan 21 14:01:16 crc kubenswrapper[4765]: I0121 14:01:16.361062 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400-catalog-content\") pod \"redhat-marketplace-4qhvb\" (UID: \"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400\") " pod="openshift-marketplace/redhat-marketplace-4qhvb" Jan 21 14:01:16 crc kubenswrapper[4765]: I0121 14:01:16.462568 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dclrn\" (UniqueName: \"kubernetes.io/projected/bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400-kube-api-access-dclrn\") pod \"redhat-marketplace-4qhvb\" (UID: \"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400\") " pod="openshift-marketplace/redhat-marketplace-4qhvb" Jan 21 14:01:16 crc kubenswrapper[4765]: I0121 14:01:16.462642 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400-catalog-content\") pod \"redhat-marketplace-4qhvb\" (UID: \"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400\") " pod="openshift-marketplace/redhat-marketplace-4qhvb" Jan 21 14:01:16 crc kubenswrapper[4765]: I0121 14:01:16.462705 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400-utilities\") pod \"redhat-marketplace-4qhvb\" (UID: \"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400\") " pod="openshift-marketplace/redhat-marketplace-4qhvb" Jan 21 14:01:16 crc kubenswrapper[4765]: I0121 14:01:16.463067 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400-utilities\") pod \"redhat-marketplace-4qhvb\" (UID: \"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400\") " pod="openshift-marketplace/redhat-marketplace-4qhvb" Jan 21 14:01:16 crc kubenswrapper[4765]: I0121 14:01:16.463600 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400-catalog-content\") pod \"redhat-marketplace-4qhvb\" (UID: 
\"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400\") " pod="openshift-marketplace/redhat-marketplace-4qhvb" Jan 21 14:01:16 crc kubenswrapper[4765]: I0121 14:01:16.488105 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dclrn\" (UniqueName: \"kubernetes.io/projected/bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400-kube-api-access-dclrn\") pod \"redhat-marketplace-4qhvb\" (UID: \"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400\") " pod="openshift-marketplace/redhat-marketplace-4qhvb" Jan 21 14:01:16 crc kubenswrapper[4765]: I0121 14:01:16.579636 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qhvb" Jan 21 14:01:21 crc kubenswrapper[4765]: I0121 14:01:21.424293 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qhvb"] Jan 21 14:01:21 crc kubenswrapper[4765]: W0121 14:01:21.485541 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf3cbb95_6b2e_41b7_bfe8_3cbb9935c400.slice/crio-a2bffe13e0105351dea7218fdc7fed0a8c85a548595688265d055353ded673ff WatchSource:0}: Error finding container a2bffe13e0105351dea7218fdc7fed0a8c85a548595688265d055353ded673ff: Status 404 returned error can't find the container with id a2bffe13e0105351dea7218fdc7fed0a8c85a548595688265d055353ded673ff Jan 21 14:01:21 crc kubenswrapper[4765]: I0121 14:01:21.623232 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qhvb" event={"ID":"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400","Type":"ContainerStarted","Data":"a2bffe13e0105351dea7218fdc7fed0a8c85a548595688265d055353ded673ff"} Jan 21 14:01:21 crc kubenswrapper[4765]: I0121 14:01:21.623508 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fjdnn/must-gather-k5mtw" event={"ID":"f2dc91b5-0f41-4899-90c9-e0dcab80e4d8","Type":"ContainerStarted","Data":"17a810fdacc021ad0ab0645d1207a65f40c0a14c2de5221844979f563062fc03"} Jan 21 14:01:21 crc kubenswrapper[4765]: I0121 14:01:21.623629 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fjdnn/must-gather-k5mtw" event={"ID":"f2dc91b5-0f41-4899-90c9-e0dcab80e4d8","Type":"ContainerStarted","Data":"e311900b3468a4f0f64592bf9989a203a47d4c97b1df9c61af96d9f3cc861dc8"} Jan 21 14:01:21 crc kubenswrapper[4765]: I0121 14:01:21.651524 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fjdnn/must-gather-k5mtw" podStartSLOduration=1.72950421 podStartE2EDuration="10.651504569s" podCreationTimestamp="2026-01-21 14:01:11 +0000 UTC" firstStartedPulling="2026-01-21 14:01:12.068714934 +0000 UTC m=+3533.086440756" lastFinishedPulling="2026-01-21 14:01:20.990715283 +0000 UTC m=+3542.008441115" observedRunningTime="2026-01-21 14:01:21.643368696 +0000 UTC m=+3542.661094518" watchObservedRunningTime="2026-01-21 14:01:21.651504569 +0000 UTC m=+3542.669230391" Jan 21 14:01:22 crc kubenswrapper[4765]: I0121 14:01:22.631839 4765 generic.go:334] "Generic (PLEG): container finished" podID="bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400" containerID="744d697a4bfa41888cc91de93754e1999fe668a54d244c45adfbf04a931967a4" exitCode=0 Jan 21 14:01:22 crc kubenswrapper[4765]: I0121 14:01:22.631936 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qhvb" 
event={"ID":"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400","Type":"ContainerDied","Data":"744d697a4bfa41888cc91de93754e1999fe668a54d244c45adfbf04a931967a4"} Jan 21 14:01:24 crc kubenswrapper[4765]: I0121 14:01:24.650529 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qhvb" event={"ID":"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400","Type":"ContainerStarted","Data":"9fc1c67b480decd41e64cfb0a290201cba78b25c6104620c3abd1615c1b911de"} Jan 21 14:01:25 crc kubenswrapper[4765]: I0121 14:01:25.672428 4765 generic.go:334] "Generic (PLEG): container finished" podID="bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400" containerID="9fc1c67b480decd41e64cfb0a290201cba78b25c6104620c3abd1615c1b911de" exitCode=0 Jan 21 14:01:25 crc kubenswrapper[4765]: I0121 14:01:25.672533 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qhvb" event={"ID":"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400","Type":"ContainerDied","Data":"9fc1c67b480decd41e64cfb0a290201cba78b25c6104620c3abd1615c1b911de"} Jan 21 14:01:25 crc kubenswrapper[4765]: I0121 14:01:25.983387 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fjdnn/crc-debug-fk5cj"] Jan 21 14:01:25 crc kubenswrapper[4765]: I0121 14:01:25.985375 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fjdnn/crc-debug-fk5cj" Jan 21 14:01:25 crc kubenswrapper[4765]: I0121 14:01:25.998648 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-fjdnn"/"default-dockercfg-9cx69" Jan 21 14:01:26 crc kubenswrapper[4765]: I0121 14:01:26.045381 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxv6f\" (UniqueName: \"kubernetes.io/projected/5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7-kube-api-access-gxv6f\") pod \"crc-debug-fk5cj\" (UID: \"5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7\") " pod="openshift-must-gather-fjdnn/crc-debug-fk5cj" Jan 21 14:01:26 crc kubenswrapper[4765]: I0121 14:01:26.045528 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7-host\") pod \"crc-debug-fk5cj\" (UID: \"5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7\") " pod="openshift-must-gather-fjdnn/crc-debug-fk5cj" Jan 21 14:01:26 crc kubenswrapper[4765]: I0121 14:01:26.147571 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7-host\") pod \"crc-debug-fk5cj\" (UID: \"5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7\") " pod="openshift-must-gather-fjdnn/crc-debug-fk5cj" Jan 21 14:01:26 crc kubenswrapper[4765]: I0121 14:01:26.147710 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxv6f\" (UniqueName: \"kubernetes.io/projected/5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7-kube-api-access-gxv6f\") pod \"crc-debug-fk5cj\" (UID: \"5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7\") " pod="openshift-must-gather-fjdnn/crc-debug-fk5cj" Jan 21 14:01:26 crc kubenswrapper[4765]: I0121 14:01:26.147745 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7-host\") pod \"crc-debug-fk5cj\" (UID: \"5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7\") " pod="openshift-must-gather-fjdnn/crc-debug-fk5cj" Jan 21 14:01:26 crc kubenswrapper[4765]: I0121 14:01:26.168482 
4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxv6f\" (UniqueName: \"kubernetes.io/projected/5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7-kube-api-access-gxv6f\") pod \"crc-debug-fk5cj\" (UID: \"5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7\") " pod="openshift-must-gather-fjdnn/crc-debug-fk5cj" Jan 21 14:01:26 crc kubenswrapper[4765]: I0121 14:01:26.328079 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fjdnn/crc-debug-fk5cj" Jan 21 14:01:26 crc kubenswrapper[4765]: W0121 14:01:26.370569 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c44ecbb_39c0_468d_b46b_9bc6bdc14bd7.slice/crio-1de2deb66ef2f2c56cc07ac30d0964a646be5433e4662a40733e5870eeed06f2 WatchSource:0}: Error finding container 1de2deb66ef2f2c56cc07ac30d0964a646be5433e4662a40733e5870eeed06f2: Status 404 returned error can't find the container with id 1de2deb66ef2f2c56cc07ac30d0964a646be5433e4662a40733e5870eeed06f2 Jan 21 14:01:26 crc kubenswrapper[4765]: I0121 14:01:26.686953 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qhvb" event={"ID":"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400","Type":"ContainerStarted","Data":"56ed793ab11be9050cbe78427cba1189ca0a3aaaec3d81ca545777d9d0391acd"} Jan 21 14:01:26 crc kubenswrapper[4765]: I0121 14:01:26.688671 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fjdnn/crc-debug-fk5cj" event={"ID":"5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7","Type":"ContainerStarted","Data":"1de2deb66ef2f2c56cc07ac30d0964a646be5433e4662a40733e5870eeed06f2"} Jan 21 14:01:26 crc kubenswrapper[4765]: I0121 14:01:26.714611 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4qhvb" podStartSLOduration=6.940673902 podStartE2EDuration="10.714593935s" podCreationTimestamp="2026-01-21 14:01:16 +0000 UTC" firstStartedPulling="2026-01-21 14:01:22.633929962 +0000 UTC m=+3543.651655784" lastFinishedPulling="2026-01-21 14:01:26.407849995 +0000 UTC m=+3547.425575817" observedRunningTime="2026-01-21 14:01:26.712631459 +0000 UTC m=+3547.730357271" watchObservedRunningTime="2026-01-21 14:01:26.714593935 +0000 UTC m=+3547.732319757" Jan 21 14:01:30 crc kubenswrapper[4765]: I0121 14:01:30.904899 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6ccc6775fd-qhnc2_4424b63d-0688-473e-80e8-8cd4148911a1/barbican-api-log/0.log" Jan 21 14:01:30 crc kubenswrapper[4765]: I0121 14:01:30.920069 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6ccc6775fd-qhnc2_4424b63d-0688-473e-80e8-8cd4148911a1/barbican-api/0.log" Jan 21 14:01:31 crc kubenswrapper[4765]: I0121 14:01:31.062139 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7fd49c47b6-4hvtg_8aca8cf8-41b9-44a4-8948-94717695f201/barbican-keystone-listener-log/0.log" Jan 21 14:01:31 crc kubenswrapper[4765]: I0121 14:01:31.092490 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7fd49c47b6-4hvtg_8aca8cf8-41b9-44a4-8948-94717695f201/barbican-keystone-listener/0.log" Jan 21 14:01:31 crc kubenswrapper[4765]: I0121 14:01:31.297098 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-667d97cc75-tm9lv_d9390565-b433-4d8e-a112-7f7539cbdc3e/barbican-worker-log/0.log" Jan 21 14:01:31 crc 
kubenswrapper[4765]: I0121 14:01:31.304202 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-667d97cc75-tm9lv_d9390565-b433-4d8e-a112-7f7539cbdc3e/barbican-worker/0.log" Jan 21 14:01:31 crc kubenswrapper[4765]: I0121 14:01:31.390560 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2_244e5c68-a93a-44e7-a8fd-d4368ee754bd/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:01:31 crc kubenswrapper[4765]: I0121 14:01:31.452976 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e149475f-fb59-4dd4-92f6-d83b29234528/ceilometer-central-agent/0.log" Jan 21 14:01:31 crc kubenswrapper[4765]: I0121 14:01:31.481899 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e149475f-fb59-4dd4-92f6-d83b29234528/ceilometer-notification-agent/0.log" Jan 21 14:01:31 crc kubenswrapper[4765]: I0121 14:01:31.491144 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e149475f-fb59-4dd4-92f6-d83b29234528/sg-core/0.log" Jan 21 14:01:31 crc kubenswrapper[4765]: I0121 14:01:31.530189 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e149475f-fb59-4dd4-92f6-d83b29234528/proxy-httpd/0.log" Jan 21 14:01:31 crc kubenswrapper[4765]: I0121 14:01:31.548795 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264/cinder-api-log/0.log" Jan 21 14:01:31 crc kubenswrapper[4765]: I0121 14:01:31.607559 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264/cinder-api/0.log" Jan 21 14:01:31 crc kubenswrapper[4765]: I0121 14:01:31.677503 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9d8e00dc-cddb-4ae9-a128-684e2ca459f7/cinder-scheduler/0.log" Jan 21 14:01:31 crc kubenswrapper[4765]: I0121 14:01:31.877671 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9d8e00dc-cddb-4ae9-a128-684e2ca459f7/probe/0.log" Jan 21 14:01:31 crc kubenswrapper[4765]: I0121 14:01:31.912526 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-s77bh_9a6275ee-1fe3-407a-b438-a189ac6b3241/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:01:31 crc kubenswrapper[4765]: I0121 14:01:31.956944 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vhz72_b30a7ddd-acca-4134-8807-675f980b4a4b/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:01:32 crc kubenswrapper[4765]: I0121 14:01:32.029737 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-sh9vb_8b82d059-d861-40e4-8892-ba17220d1b78/dnsmasq-dns/0.log" Jan 21 14:01:32 crc kubenswrapper[4765]: I0121 14:01:32.040255 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-sh9vb_8b82d059-d861-40e4-8892-ba17220d1b78/init/0.log" Jan 21 14:01:32 crc kubenswrapper[4765]: I0121 14:01:32.085015 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6_1c7356a7-bab7-4123-9f98-a484d751e8e7/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:01:32 crc kubenswrapper[4765]: I0121 14:01:32.111490 4765 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_165f5e89-08b4-465c-acc6-52d76f9c0db0/glance-log/0.log" Jan 21 14:01:32 crc kubenswrapper[4765]: I0121 14:01:32.139092 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_165f5e89-08b4-465c-acc6-52d76f9c0db0/glance-httpd/0.log" Jan 21 14:01:32 crc kubenswrapper[4765]: I0121 14:01:32.152071 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_85a4c5bc-cacf-4c49-b285-295c9bfb7b74/glance-log/0.log" Jan 21 14:01:32 crc kubenswrapper[4765]: I0121 14:01:32.173067 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_85a4c5bc-cacf-4c49-b285-295c9bfb7b74/glance-httpd/0.log" Jan 21 14:01:32 crc kubenswrapper[4765]: I0121 14:01:32.532942 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-86c57777f6-gqpgv_1241b1f0-34c1-401a-b91f-13b72926cc2c/horizon-log/0.log" Jan 21 14:01:32 crc kubenswrapper[4765]: I0121 14:01:32.699091 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-86c57777f6-gqpgv_1241b1f0-34c1-401a-b91f-13b72926cc2c/horizon/2.log" Jan 21 14:01:32 crc kubenswrapper[4765]: I0121 14:01:32.720411 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-86c57777f6-gqpgv_1241b1f0-34c1-401a-b91f-13b72926cc2c/horizon/1.log" Jan 21 14:01:32 crc kubenswrapper[4765]: I0121 14:01:32.754647 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2_833e4a2d-2bcb-4dfe-90ba-2e239625d5bf/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:01:32 crc kubenswrapper[4765]: I0121 14:01:32.807340 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-4p9nk_d143acd1-ab20-495a-ba80-139132d247e2/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:01:33 crc kubenswrapper[4765]: I0121 14:01:33.065556 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7c5d9867cf-9ffzm_80b18085-cc60-4891-bf22-0c8535624d5b/keystone-api/0.log" Jan 21 14:01:33 crc kubenswrapper[4765]: I0121 14:01:33.076715 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29483401-4rwzk_dac68597-6a74-41ae-987b-e6968ab9931d/keystone-cron/0.log" Jan 21 14:01:33 crc kubenswrapper[4765]: I0121 14:01:33.101439 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a/kube-state-metrics/0.log" Jan 21 14:01:33 crc kubenswrapper[4765]: I0121 14:01:33.162677 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl_26624762-8a2d-4273-9f09-73895227b65c/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:01:36 crc kubenswrapper[4765]: I0121 14:01:36.580451 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4qhvb" Jan 21 14:01:36 crc kubenswrapper[4765]: I0121 14:01:36.580956 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4qhvb" Jan 21 14:01:36 crc kubenswrapper[4765]: I0121 14:01:36.654937 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4qhvb" Jan 21 
14:01:36 crc kubenswrapper[4765]: I0121 14:01:36.890568 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4qhvb" Jan 21 14:01:36 crc kubenswrapper[4765]: I0121 14:01:36.947024 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qhvb"] Jan 21 14:01:38 crc kubenswrapper[4765]: I0121 14:01:38.844624 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4qhvb" podUID="bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400" containerName="registry-server" containerID="cri-o://56ed793ab11be9050cbe78427cba1189ca0a3aaaec3d81ca545777d9d0391acd" gracePeriod=2 Jan 21 14:01:39 crc kubenswrapper[4765]: I0121 14:01:39.854846 4765 generic.go:334] "Generic (PLEG): container finished" podID="bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400" containerID="56ed793ab11be9050cbe78427cba1189ca0a3aaaec3d81ca545777d9d0391acd" exitCode=0 Jan 21 14:01:39 crc kubenswrapper[4765]: I0121 14:01:39.855183 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qhvb" event={"ID":"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400","Type":"ContainerDied","Data":"56ed793ab11be9050cbe78427cba1189ca0a3aaaec3d81ca545777d9d0391acd"} Jan 21 14:01:44 crc kubenswrapper[4765]: I0121 14:01:44.033549 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qhvb" Jan 21 14:01:44 crc kubenswrapper[4765]: I0121 14:01:44.158544 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400-catalog-content\") pod \"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400\" (UID: \"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400\") " Jan 21 14:01:44 crc kubenswrapper[4765]: I0121 14:01:44.158584 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dclrn\" (UniqueName: \"kubernetes.io/projected/bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400-kube-api-access-dclrn\") pod \"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400\" (UID: \"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400\") " Jan 21 14:01:44 crc kubenswrapper[4765]: I0121 14:01:44.158664 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400-utilities\") pod \"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400\" (UID: \"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400\") " Jan 21 14:01:44 crc kubenswrapper[4765]: I0121 14:01:44.161916 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400-utilities" (OuterVolumeSpecName: "utilities") pod "bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400" (UID: "bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:01:44 crc kubenswrapper[4765]: I0121 14:01:44.169061 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400-kube-api-access-dclrn" (OuterVolumeSpecName: "kube-api-access-dclrn") pod "bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400" (UID: "bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400"). InnerVolumeSpecName "kube-api-access-dclrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:01:44 crc kubenswrapper[4765]: I0121 14:01:44.191104 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400" (UID: "bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:01:44 crc kubenswrapper[4765]: I0121 14:01:44.260301 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dclrn\" (UniqueName: \"kubernetes.io/projected/bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400-kube-api-access-dclrn\") on node \"crc\" DevicePath \"\"" Jan 21 14:01:44 crc kubenswrapper[4765]: I0121 14:01:44.260478 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:01:44 crc kubenswrapper[4765]: I0121 14:01:44.260489 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:01:44 crc kubenswrapper[4765]: I0121 14:01:44.906344 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fjdnn/crc-debug-fk5cj" event={"ID":"5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7","Type":"ContainerStarted","Data":"0c8c07e5ac4a869a785d43bd8a59b06bdd013d43681647f4f7ff99c9e9e33b9a"} Jan 21 14:01:44 crc kubenswrapper[4765]: I0121 14:01:44.910510 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qhvb" event={"ID":"bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400","Type":"ContainerDied","Data":"a2bffe13e0105351dea7218fdc7fed0a8c85a548595688265d055353ded673ff"} Jan 21 14:01:44 crc kubenswrapper[4765]: I0121 14:01:44.910569 4765 scope.go:117] "RemoveContainer" containerID="56ed793ab11be9050cbe78427cba1189ca0a3aaaec3d81ca545777d9d0391acd" Jan 21 14:01:44 crc kubenswrapper[4765]: I0121 14:01:44.910766 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qhvb" Jan 21 14:01:44 crc kubenswrapper[4765]: I0121 14:01:44.927975 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fjdnn/crc-debug-fk5cj" podStartSLOduration=2.649593027 podStartE2EDuration="19.927951096s" podCreationTimestamp="2026-01-21 14:01:25 +0000 UTC" firstStartedPulling="2026-01-21 14:01:26.374183479 +0000 UTC m=+3547.391909291" lastFinishedPulling="2026-01-21 14:01:43.652541538 +0000 UTC m=+3564.670267360" observedRunningTime="2026-01-21 14:01:44.920546784 +0000 UTC m=+3565.938272616" watchObservedRunningTime="2026-01-21 14:01:44.927951096 +0000 UTC m=+3565.945676918" Jan 21 14:01:44 crc kubenswrapper[4765]: I0121 14:01:44.929525 4765 scope.go:117] "RemoveContainer" containerID="9fc1c67b480decd41e64cfb0a290201cba78b25c6104620c3abd1615c1b911de" Jan 21 14:01:44 crc kubenswrapper[4765]: I0121 14:01:44.966069 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qhvb"] Jan 21 14:01:44 crc kubenswrapper[4765]: I0121 14:01:44.966310 4765 scope.go:117] "RemoveContainer" containerID="744d697a4bfa41888cc91de93754e1999fe668a54d244c45adfbf04a931967a4" Jan 21 14:01:44 crc kubenswrapper[4765]: I0121 14:01:44.978638 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qhvb"] Jan 21 14:01:45 crc kubenswrapper[4765]: I0121 14:01:45.628094 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400" path="/var/lib/kubelet/pods/bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400/volumes" Jan 21 14:01:51 crc kubenswrapper[4765]: I0121 14:01:51.498589 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skh9c_f05e7811-d30d-4f00-b816-a740a454c635/controller/0.log" Jan 21 14:01:51 crc kubenswrapper[4765]: I0121 14:01:51.506321 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skh9c_f05e7811-d30d-4f00-b816-a740a454c635/kube-rbac-proxy/0.log" Jan 21 14:01:51 crc kubenswrapper[4765]: I0121 14:01:51.532389 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-qlhwh_af902f5f-216b-41c7-b1e9-56953151dd65/frr-k8s-webhook-server/0.log" Jan 21 14:01:51 crc kubenswrapper[4765]: I0121 14:01:51.656769 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/controller/0.log" Jan 21 14:01:53 crc kubenswrapper[4765]: I0121 14:01:53.821200 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/frr/0.log" Jan 21 14:01:53 crc kubenswrapper[4765]: I0121 14:01:53.831702 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/reloader/0.log" Jan 21 14:01:53 crc kubenswrapper[4765]: I0121 14:01:53.839941 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/frr-metrics/0.log" Jan 21 14:01:53 crc kubenswrapper[4765]: I0121 14:01:53.850512 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/kube-rbac-proxy/0.log" Jan 21 14:01:53 crc kubenswrapper[4765]: I0121 14:01:53.864810 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/kube-rbac-proxy-frr/0.log" Jan 21 14:01:53 crc kubenswrapper[4765]: I0121 14:01:53.885067 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/cp-frr-files/0.log" Jan 21 14:01:53 crc kubenswrapper[4765]: I0121 14:01:53.896594 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/cp-reloader/0.log" Jan 21 14:01:53 crc kubenswrapper[4765]: I0121 14:01:53.907139 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/cp-metrics/0.log" Jan 21 14:01:53 crc kubenswrapper[4765]: I0121 14:01:53.946712 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c66566bf6-ls8r8_57ed60d8-a38f-47ba-b66d-6e7e557b4399/manager/0.log" Jan 21 14:01:53 crc kubenswrapper[4765]: I0121 14:01:53.956392 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-77844fbdcc-cgv2c_7ba871a2-babc-4cc6-a13b-4fa78e3d0580/webhook-server/0.log" Jan 21 14:01:54 crc kubenswrapper[4765]: I0121 14:01:54.469761 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vswxq_8f59aeb8-b8fe-44bc-9e55-94eba06a676b/speaker/0.log" Jan 21 14:01:54 crc kubenswrapper[4765]: I0121 14:01:54.477288 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vswxq_8f59aeb8-b8fe-44bc-9e55-94eba06a676b/kube-rbac-proxy/0.log" Jan 21 14:01:54 crc kubenswrapper[4765]: I0121 14:01:54.870744 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_02d30b98-43d0-4b3f-82c0-64193524da98/memcached/0.log" Jan 21 14:01:54 crc kubenswrapper[4765]: I0121 14:01:54.958515 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77dcd8ffdf-64j8s_d069b575-51e3-4f93-bff8-a1f0cb141797/neutron-api/0.log" Jan 21 14:01:54 crc kubenswrapper[4765]: I0121 14:01:54.991981 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77dcd8ffdf-64j8s_d069b575-51e3-4f93-bff8-a1f0cb141797/neutron-httpd/0.log" Jan 21 14:01:55 crc kubenswrapper[4765]: I0121 14:01:55.030418 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs_8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:01:55 crc kubenswrapper[4765]: I0121 14:01:55.113465 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e6ce4b6e-90fe-41ba-a3e8-15fc98276798/nova-api-log/0.log" Jan 21 14:01:55 crc kubenswrapper[4765]: I0121 14:01:55.318846 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e6ce4b6e-90fe-41ba-a3e8-15fc98276798/nova-api-api/0.log" Jan 21 14:01:55 crc kubenswrapper[4765]: I0121 14:01:55.398882 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_79930bf0-36ee-4f2e-8530-0bcdf3c9d998/nova-cell0-conductor-conductor/0.log" Jan 21 14:01:55 crc kubenswrapper[4765]: I0121 14:01:55.485554 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_90f30caf-f36a-421c-b3fc-40d01f40d9e7/nova-cell1-conductor-conductor/0.log" Jan 21 14:01:55 crc kubenswrapper[4765]: I0121 14:01:55.553863 4765 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_a9571353-0716-428c-8462-0fa1c4fc8ab3/nova-cell1-novncproxy-novncproxy/0.log" Jan 21 14:01:55 crc kubenswrapper[4765]: I0121 14:01:55.606706 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-pntmx_13a3818b-4be7-40d0-99d2-ae84ab4caceb/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:01:55 crc kubenswrapper[4765]: I0121 14:01:55.682630 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa/nova-metadata-log/0.log" Jan 21 14:01:56 crc kubenswrapper[4765]: I0121 14:01:56.673985 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa/nova-metadata-metadata/0.log" Jan 21 14:01:56 crc kubenswrapper[4765]: I0121 14:01:56.750018 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f1a509b9-a443-47bc-b693-4faa2e417ce8/nova-scheduler-scheduler/0.log" Jan 21 14:01:56 crc kubenswrapper[4765]: I0121 14:01:56.788558 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_cf0cab45-7e21-4b1e-a868-b19db9379c99/galera/0.log" Jan 21 14:01:56 crc kubenswrapper[4765]: I0121 14:01:56.803401 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_cf0cab45-7e21-4b1e-a868-b19db9379c99/mysql-bootstrap/0.log" Jan 21 14:01:56 crc kubenswrapper[4765]: I0121 14:01:56.828627 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_00d8ba34-9c69-4d77-a58a-e8202aa68b31/galera/0.log" Jan 21 14:01:56 crc kubenswrapper[4765]: I0121 14:01:56.840753 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_00d8ba34-9c69-4d77-a58a-e8202aa68b31/mysql-bootstrap/0.log" Jan 21 14:01:56 crc kubenswrapper[4765]: I0121 14:01:56.847631 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_344fdbd2-c402-42e4-83d5-7e0bb3b978f6/openstackclient/0.log" Jan 21 14:01:56 crc kubenswrapper[4765]: I0121 14:01:56.867047 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gkqpl_acf0ca9c-abda-4c3b-98d3-ca3e6189434a/ovn-controller/0.log" Jan 21 14:01:56 crc kubenswrapper[4765]: I0121 14:01:56.878071 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-zmx6x_2c7cc04a-963e-42e5-82ca-674e3e576a27/openstack-network-exporter/0.log" Jan 21 14:01:56 crc kubenswrapper[4765]: I0121 14:01:56.900135 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-64shj_0babea53-5832-46a5-a0e6-9fd9823cbbe9/ovsdb-server/0.log" Jan 21 14:01:56 crc kubenswrapper[4765]: I0121 14:01:56.909624 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-64shj_0babea53-5832-46a5-a0e6-9fd9823cbbe9/ovs-vswitchd/0.log" Jan 21 14:01:56 crc kubenswrapper[4765]: I0121 14:01:56.918128 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-64shj_0babea53-5832-46a5-a0e6-9fd9823cbbe9/ovsdb-server-init/0.log" Jan 21 14:01:56 crc kubenswrapper[4765]: I0121 14:01:56.956715 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-cqnjn_db5e6d29-c1aa-4a16-99a9-e2d559619d90/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 
21 14:01:56 crc kubenswrapper[4765]: I0121 14:01:56.967093 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_729e9cbc-22fc-4dea-a03d-5ebcd6c5f183/ovn-northd/0.log" Jan 21 14:01:56 crc kubenswrapper[4765]: I0121 14:01:56.978355 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_729e9cbc-22fc-4dea-a03d-5ebcd6c5f183/openstack-network-exporter/0.log" Jan 21 14:01:56 crc kubenswrapper[4765]: I0121 14:01:56.995187 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3/ovsdbserver-nb/0.log" Jan 21 14:01:56 crc kubenswrapper[4765]: I0121 14:01:56.999635 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3/openstack-network-exporter/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.015894 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f1cf8f51-de39-4833-807f-f5ace97d9c30/ovsdbserver-sb/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.020509 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f1cf8f51-de39-4833-807f-f5ace97d9c30/openstack-network-exporter/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.094543 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-86cbcc788d-b897j_369424ef-89f9-462a-80aa-6eb36049f6b5/placement-log/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.137077 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-86cbcc788d-b897j_369424ef-89f9-462a-80aa-6eb36049f6b5/placement-api/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.156521 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f302fd12-fe7e-455b-94f0-aafe7ddb95f2/rabbitmq/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.161033 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f302fd12-fe7e-455b-94f0-aafe7ddb95f2/setup-container/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.193561 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_997a77bd-3d32-4db3-a34d-588eb0ea88a3/rabbitmq/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.198326 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_997a77bd-3d32-4db3-a34d-588eb0ea88a3/setup-container/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.231816 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb_b4fe3c7f-5af2-4efc-bd46-40f31624c194/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.273104 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-7wzhn_2f4e0a44-0962-4477-9526-4df004dd3625/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.303121 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp_0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.319275 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-2kwpl_4de5f530-bcea-4203-8a79-9e9aebf97e0f/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.334309 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-g6wfz_8ea0edfd-ace0-474e-b868-7ad5bed77cab/ssh-known-hosts-edpm-deployment/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.423767 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-c67b7f46c-vdfh2_dcc230e6-cf6d-4fc2-bea2-9ba2b028716b/proxy-httpd/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.465369 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-c67b7f46c-vdfh2_dcc230e6-cf6d-4fc2-bea2-9ba2b028716b/proxy-server/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.479464 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-j5v45_60abe159-7e5d-4586-9d1b-0050de42edbe/swift-ring-rebalance/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.533731 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/account-server/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.565385 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/account-replicator/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.572574 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/account-auditor/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.592560 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/account-reaper/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.597887 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/container-server/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.630637 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/container-replicator/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.640713 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/container-auditor/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.651715 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/container-updater/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.659630 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/object-server/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.679290 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/object-replicator/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.701062 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/object-auditor/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.707273 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/object-updater/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.718046 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/object-expirer/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.724996 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/rsync/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.732411 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/swift-recon-cron/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.815307 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb_72b52054-c641-4cfb-9e83-f5b6794f77de/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.842775 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_65a8700b-dcb3-42d5-9655-61f2c977e9e2/tempest-tests-tempest-tests-runner/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.852755 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_30dbae35-d4af-4e14-831b-3c17f0e66a0c/test-operator-logs-container/0.log" Jan 21 14:01:57 crc kubenswrapper[4765]: I0121 14:01:57.870256 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn_f966a827-0001-4f9f-9600-072b24c50c9e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:02:11 crc kubenswrapper[4765]: I0121 14:02:11.338739 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m_20b31ee6-0264-4ffb-b43c-abbea443e89e/extract/0.log" Jan 21 14:02:11 crc kubenswrapper[4765]: I0121 14:02:11.350849 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m_20b31ee6-0264-4ffb-b43c-abbea443e89e/util/0.log" Jan 21 14:02:11 crc kubenswrapper[4765]: I0121 14:02:11.359348 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m_20b31ee6-0264-4ffb-b43c-abbea443e89e/pull/0.log" Jan 21 14:02:11 crc kubenswrapper[4765]: I0121 14:02:11.443252 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-848df65fbb-79lv9_448c57b9-0176-42e1-a493-609bc853db01/manager/0.log" Jan 21 14:02:11 crc kubenswrapper[4765]: I0121 14:02:11.491262 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-kq85p_cd5b6743-7a2a-4d03-8adc-952fb87e6f02/manager/0.log" Jan 21 14:02:11 crc kubenswrapper[4765]: I0121 14:02:11.512089 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-dgbtx_079ac5a2-3654-48e8-8bf0-597018fc2ca5/manager/0.log" Jan 21 14:02:11 crc kubenswrapper[4765]: I0121 14:02:11.600087 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-65hfk_4c92e105-ba8b-4828-bc30-857c5431672f/manager/0.log" Jan 21 14:02:11 crc kubenswrapper[4765]: I0121 14:02:11.621448 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-8pvpr_ab7eaa76-7a22-4d3c-85a3-9b643832d707/manager/0.log" Jan 21 14:02:11 crc kubenswrapper[4765]: I0121 14:02:11.698821 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-t42c2_00c36135-159f-43be-be7c-b4f01cf2ace7/manager/0.log" Jan 21 14:02:11 crc kubenswrapper[4765]: I0121 14:02:11.928145 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-c74jr_2962f7bb-1d22-4715-b609-2eb6da1de834/manager/0.log" Jan 21 14:02:11 crc kubenswrapper[4765]: I0121 14:02:11.949985 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rk4x7_2a3c28ee-e170-4592-8291-db76c15675d1/manager/0.log" Jan 21 14:02:12 crc kubenswrapper[4765]: I0121 14:02:12.003893 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-hv2dn_30a8ff01-0173-45a7-9460-9df64146234d/manager/0.log" Jan 21 14:02:12 crc kubenswrapper[4765]: I0121 14:02:12.021933 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-rxxvb_c78d0245-2ac0-4576-860f-20c8ad7f7fa3/manager/0.log" Jan 21 14:02:12 crc kubenswrapper[4765]: I0121 14:02:12.065505 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-8kq4g_ecd5f054-6284-485a-8c41-6b2338a5c0f4/manager/0.log" Jan 21 14:02:12 crc kubenswrapper[4765]: I0121 14:02:12.114964 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-r429h_bdcf568f-99c9-4432-b763-ce16903da409/manager/0.log" Jan 21 14:02:12 crc kubenswrapper[4765]: I0121 14:02:12.187021 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-m48zr_953ef395-07f2-4b90-8232-77b94a176094/manager/0.log" Jan 21 14:02:12 crc kubenswrapper[4765]: I0121 14:02:12.210036 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-kh677_882965e2-7eb0-4971-9770-e750a8fe36dc/manager/0.log" Jan 21 14:02:12 crc kubenswrapper[4765]: I0121 14:02:12.242490 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7_246657ac-def3-41ce-bd99-a8d00d97c86b/manager/0.log" Jan 21 14:02:12 crc kubenswrapper[4765]: I0121 14:02:12.382142 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-ccbfb74b7-bm4rb_5db9c466-59ec-47fb-8643-560935c3c92c/operator/0.log" Jan 21 14:02:13 crc kubenswrapper[4765]: I0121 14:02:13.443952 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75fcf77584-5dfd7_af5f1c65-c317-4058-9d98-066b866bf83a/manager/0.log" Jan 21 14:02:13 crc kubenswrapper[4765]: I0121 14:02:13.456005 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-p9ml4_d35e26b9-ec61-4be2-b6f6-f40544f4094f/registry-server/0.log" Jan 21 14:02:13 crc kubenswrapper[4765]: I0121 14:02:13.503942 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-kvhff_17d3ffc3-5383-4beb-91d4-db120ddb1c74/manager/0.log" Jan 21 14:02:13 crc kubenswrapper[4765]: I0121 14:02:13.529654 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-97x9c_2bc79302-e5a0-4288-8b2e-ee371eb775a1/manager/0.log" Jan 21 14:02:13 crc kubenswrapper[4765]: I0121 14:02:13.566123 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-ql7j4_cead9f09-f8d9-4cf3-960e-eb1ba8f1fa99/operator/0.log" Jan 21 14:02:13 crc kubenswrapper[4765]: I0121 14:02:13.590743 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-gh9vl_c7a6160a-aef5-41af-b1cc-cc2cd97125d7/manager/0.log" Jan 21 14:02:13 crc kubenswrapper[4765]: I0121 14:02:13.643759 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-dhcgg_4c4840ab-a9b6-4243-a2f8-e21eaa84f165/manager/0.log" Jan 21 14:02:13 crc kubenswrapper[4765]: I0121 14:02:13.668401 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-s6zq8_be3fcc93-c1a3-4191-8f75-4d8aa5767593/manager/0.log" Jan 21 14:02:13 crc kubenswrapper[4765]: I0121 14:02:13.683772 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-8r9cq_2d19b122-8cf4-4b4a-8d31-037af2fd65fb/manager/0.log" Jan 21 14:02:20 crc kubenswrapper[4765]: I0121 14:02:20.930673 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-x4zpp_50ea39eb-559e-4298-9133-4d2a5c7890cb/control-plane-machine-set-operator/0.log" Jan 21 14:02:20 crc kubenswrapper[4765]: I0121 14:02:20.963132 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mnwzz_c35257f3-6d8a-4917-a956-3b71a0e54c23/kube-rbac-proxy/0.log" Jan 21 14:02:20 crc kubenswrapper[4765]: I0121 14:02:20.994007 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mnwzz_c35257f3-6d8a-4917-a956-3b71a0e54c23/machine-api-operator/0.log" Jan 21 14:02:26 crc kubenswrapper[4765]: I0121 14:02:26.353464 4765 generic.go:334] "Generic (PLEG): container finished" podID="5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7" containerID="0c8c07e5ac4a869a785d43bd8a59b06bdd013d43681647f4f7ff99c9e9e33b9a" exitCode=0 Jan 21 14:02:26 crc kubenswrapper[4765]: I0121 14:02:26.353540 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fjdnn/crc-debug-fk5cj" event={"ID":"5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7","Type":"ContainerDied","Data":"0c8c07e5ac4a869a785d43bd8a59b06bdd013d43681647f4f7ff99c9e9e33b9a"} Jan 21 14:02:27 crc kubenswrapper[4765]: I0121 14:02:27.490879 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fjdnn/crc-debug-fk5cj" Jan 21 14:02:27 crc kubenswrapper[4765]: I0121 14:02:27.529346 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fjdnn/crc-debug-fk5cj"] Jan 21 14:02:27 crc kubenswrapper[4765]: I0121 14:02:27.539378 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fjdnn/crc-debug-fk5cj"] Jan 21 14:02:27 crc kubenswrapper[4765]: I0121 14:02:27.619203 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxv6f\" (UniqueName: \"kubernetes.io/projected/5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7-kube-api-access-gxv6f\") pod \"5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7\" (UID: \"5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7\") " Jan 21 14:02:27 crc kubenswrapper[4765]: I0121 14:02:27.619480 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7-host\") pod \"5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7\" (UID: \"5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7\") " Jan 21 14:02:27 crc kubenswrapper[4765]: I0121 14:02:27.622550 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7-host" (OuterVolumeSpecName: "host") pod "5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7" (UID: "5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:02:27 crc kubenswrapper[4765]: I0121 14:02:27.647731 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7-kube-api-access-gxv6f" (OuterVolumeSpecName: "kube-api-access-gxv6f") pod "5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7" (UID: "5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7"). InnerVolumeSpecName "kube-api-access-gxv6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:02:27 crc kubenswrapper[4765]: I0121 14:02:27.722401 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxv6f\" (UniqueName: \"kubernetes.io/projected/5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7-kube-api-access-gxv6f\") on node \"crc\" DevicePath \"\"" Jan 21 14:02:27 crc kubenswrapper[4765]: I0121 14:02:27.722449 4765 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7-host\") on node \"crc\" DevicePath \"\"" Jan 21 14:02:28 crc kubenswrapper[4765]: I0121 14:02:28.385255 4765 scope.go:117] "RemoveContainer" containerID="0c8c07e5ac4a869a785d43bd8a59b06bdd013d43681647f4f7ff99c9e9e33b9a" Jan 21 14:02:28 crc kubenswrapper[4765]: I0121 14:02:28.385652 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fjdnn/crc-debug-fk5cj" Jan 21 14:02:28 crc kubenswrapper[4765]: I0121 14:02:28.729154 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fjdnn/crc-debug-vffqk"] Jan 21 14:02:28 crc kubenswrapper[4765]: E0121 14:02:28.729578 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7" containerName="container-00" Jan 21 14:02:28 crc kubenswrapper[4765]: I0121 14:02:28.729592 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7" containerName="container-00" Jan 21 14:02:28 crc kubenswrapper[4765]: E0121 14:02:28.729607 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400" containerName="extract-content" Jan 21 14:02:28 crc kubenswrapper[4765]: I0121 14:02:28.729614 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400" containerName="extract-content" Jan 21 14:02:28 crc kubenswrapper[4765]: E0121 14:02:28.729636 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400" containerName="registry-server" Jan 21 14:02:28 crc kubenswrapper[4765]: I0121 14:02:28.729643 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400" containerName="registry-server" Jan 21 14:02:28 crc kubenswrapper[4765]: E0121 14:02:28.729656 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400" containerName="extract-utilities" Jan 21 14:02:28 crc kubenswrapper[4765]: I0121 14:02:28.729663 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400" containerName="extract-utilities" Jan 21 14:02:28 crc kubenswrapper[4765]: I0121 14:02:28.729851 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7" containerName="container-00" Jan 21 14:02:28 crc kubenswrapper[4765]: I0121 14:02:28.729896 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf3cbb95-6b2e-41b7-bfe8-3cbb9935c400" containerName="registry-server" Jan 21 14:02:28 crc kubenswrapper[4765]: I0121 14:02:28.730540 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fjdnn/crc-debug-vffqk" Jan 21 14:02:28 crc kubenswrapper[4765]: I0121 14:02:28.733517 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-fjdnn"/"default-dockercfg-9cx69" Jan 21 14:02:28 crc kubenswrapper[4765]: I0121 14:02:28.844267 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/09efbfb2-b731-4ca5-93bb-8d176ee222d1-host\") pod \"crc-debug-vffqk\" (UID: \"09efbfb2-b731-4ca5-93bb-8d176ee222d1\") " pod="openshift-must-gather-fjdnn/crc-debug-vffqk" Jan 21 14:02:28 crc kubenswrapper[4765]: I0121 14:02:28.844444 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbflp\" (UniqueName: \"kubernetes.io/projected/09efbfb2-b731-4ca5-93bb-8d176ee222d1-kube-api-access-cbflp\") pod \"crc-debug-vffqk\" (UID: \"09efbfb2-b731-4ca5-93bb-8d176ee222d1\") " pod="openshift-must-gather-fjdnn/crc-debug-vffqk" Jan 21 14:02:28 crc kubenswrapper[4765]: I0121 14:02:28.946180 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/09efbfb2-b731-4ca5-93bb-8d176ee222d1-host\") pod \"crc-debug-vffqk\" (UID: \"09efbfb2-b731-4ca5-93bb-8d176ee222d1\") " pod="openshift-must-gather-fjdnn/crc-debug-vffqk" Jan 21 14:02:28 crc kubenswrapper[4765]: I0121 14:02:28.946342 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/09efbfb2-b731-4ca5-93bb-8d176ee222d1-host\") pod \"crc-debug-vffqk\" (UID: \"09efbfb2-b731-4ca5-93bb-8d176ee222d1\") " pod="openshift-must-gather-fjdnn/crc-debug-vffqk" Jan 21 14:02:28 crc kubenswrapper[4765]: I0121 14:02:28.946411 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbflp\" (UniqueName: \"kubernetes.io/projected/09efbfb2-b731-4ca5-93bb-8d176ee222d1-kube-api-access-cbflp\") pod \"crc-debug-vffqk\" (UID: \"09efbfb2-b731-4ca5-93bb-8d176ee222d1\") " pod="openshift-must-gather-fjdnn/crc-debug-vffqk" Jan 21 14:02:28 crc kubenswrapper[4765]: I0121 14:02:28.964135 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbflp\" (UniqueName: \"kubernetes.io/projected/09efbfb2-b731-4ca5-93bb-8d176ee222d1-kube-api-access-cbflp\") pod \"crc-debug-vffqk\" (UID: \"09efbfb2-b731-4ca5-93bb-8d176ee222d1\") " pod="openshift-must-gather-fjdnn/crc-debug-vffqk" Jan 21 14:02:29 crc kubenswrapper[4765]: I0121 14:02:29.048082 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fjdnn/crc-debug-vffqk" Jan 21 14:02:29 crc kubenswrapper[4765]: W0121 14:02:29.079533 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09efbfb2_b731_4ca5_93bb_8d176ee222d1.slice/crio-2e326a5ae5e0c6f03adb63d8bd1e6860f573107ef9d229fdae2ebf2e3fc18db2 WatchSource:0}: Error finding container 2e326a5ae5e0c6f03adb63d8bd1e6860f573107ef9d229fdae2ebf2e3fc18db2: Status 404 returned error can't find the container with id 2e326a5ae5e0c6f03adb63d8bd1e6860f573107ef9d229fdae2ebf2e3fc18db2 Jan 21 14:02:29 crc kubenswrapper[4765]: I0121 14:02:29.395830 4765 generic.go:334] "Generic (PLEG): container finished" podID="09efbfb2-b731-4ca5-93bb-8d176ee222d1" containerID="a85524050ca98a34d7f437eafd328cd7d181cd4e2e07191805caf0538b6ebfae" exitCode=0 Jan 21 14:02:29 crc kubenswrapper[4765]: I0121 14:02:29.395927 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fjdnn/crc-debug-vffqk" event={"ID":"09efbfb2-b731-4ca5-93bb-8d176ee222d1","Type":"ContainerDied","Data":"a85524050ca98a34d7f437eafd328cd7d181cd4e2e07191805caf0538b6ebfae"} Jan 21 14:02:29 crc kubenswrapper[4765]: I0121 14:02:29.396201 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fjdnn/crc-debug-vffqk" event={"ID":"09efbfb2-b731-4ca5-93bb-8d176ee222d1","Type":"ContainerStarted","Data":"2e326a5ae5e0c6f03adb63d8bd1e6860f573107ef9d229fdae2ebf2e3fc18db2"} Jan 21 14:02:29 crc kubenswrapper[4765]: I0121 14:02:29.624798 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7" path="/var/lib/kubelet/pods/5c44ecbb-39c0-468d-b46b-9bc6bdc14bd7/volumes" Jan 21 14:02:29 crc kubenswrapper[4765]: I0121 14:02:29.893746 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fjdnn/crc-debug-vffqk"] Jan 21 14:02:29 crc kubenswrapper[4765]: I0121 14:02:29.903668 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fjdnn/crc-debug-vffqk"] Jan 21 14:02:30 crc kubenswrapper[4765]: I0121 14:02:30.502172 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fjdnn/crc-debug-vffqk" Jan 21 14:02:30 crc kubenswrapper[4765]: I0121 14:02:30.579063 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbflp\" (UniqueName: \"kubernetes.io/projected/09efbfb2-b731-4ca5-93bb-8d176ee222d1-kube-api-access-cbflp\") pod \"09efbfb2-b731-4ca5-93bb-8d176ee222d1\" (UID: \"09efbfb2-b731-4ca5-93bb-8d176ee222d1\") " Jan 21 14:02:30 crc kubenswrapper[4765]: I0121 14:02:30.579433 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/09efbfb2-b731-4ca5-93bb-8d176ee222d1-host\") pod \"09efbfb2-b731-4ca5-93bb-8d176ee222d1\" (UID: \"09efbfb2-b731-4ca5-93bb-8d176ee222d1\") " Jan 21 14:02:30 crc kubenswrapper[4765]: I0121 14:02:30.579521 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09efbfb2-b731-4ca5-93bb-8d176ee222d1-host" (OuterVolumeSpecName: "host") pod "09efbfb2-b731-4ca5-93bb-8d176ee222d1" (UID: "09efbfb2-b731-4ca5-93bb-8d176ee222d1"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:02:30 crc kubenswrapper[4765]: I0121 14:02:30.580154 4765 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/09efbfb2-b731-4ca5-93bb-8d176ee222d1-host\") on node \"crc\" DevicePath \"\"" Jan 21 14:02:30 crc kubenswrapper[4765]: I0121 14:02:30.585454 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efbfb2-b731-4ca5-93bb-8d176ee222d1-kube-api-access-cbflp" (OuterVolumeSpecName: "kube-api-access-cbflp") pod "09efbfb2-b731-4ca5-93bb-8d176ee222d1" (UID: "09efbfb2-b731-4ca5-93bb-8d176ee222d1"). InnerVolumeSpecName "kube-api-access-cbflp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:02:30 crc kubenswrapper[4765]: I0121 14:02:30.682805 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbflp\" (UniqueName: \"kubernetes.io/projected/09efbfb2-b731-4ca5-93bb-8d176ee222d1-kube-api-access-cbflp\") on node \"crc\" DevicePath \"\"" Jan 21 14:02:31 crc kubenswrapper[4765]: I0121 14:02:31.114598 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fjdnn/crc-debug-g9qvm"] Jan 21 14:02:31 crc kubenswrapper[4765]: E0121 14:02:31.114987 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09efbfb2-b731-4ca5-93bb-8d176ee222d1" containerName="container-00" Jan 21 14:02:31 crc kubenswrapper[4765]: I0121 14:02:31.114999 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="09efbfb2-b731-4ca5-93bb-8d176ee222d1" containerName="container-00" Jan 21 14:02:31 crc kubenswrapper[4765]: I0121 14:02:31.115187 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="09efbfb2-b731-4ca5-93bb-8d176ee222d1" containerName="container-00" Jan 21 14:02:31 crc kubenswrapper[4765]: I0121 14:02:31.115789 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fjdnn/crc-debug-g9qvm" Jan 21 14:02:31 crc kubenswrapper[4765]: I0121 14:02:31.191710 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/674b9517-da12-4329-90a3-5b07ca8ab3dd-host\") pod \"crc-debug-g9qvm\" (UID: \"674b9517-da12-4329-90a3-5b07ca8ab3dd\") " pod="openshift-must-gather-fjdnn/crc-debug-g9qvm" Jan 21 14:02:31 crc kubenswrapper[4765]: I0121 14:02:31.192089 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v58s7\" (UniqueName: \"kubernetes.io/projected/674b9517-da12-4329-90a3-5b07ca8ab3dd-kube-api-access-v58s7\") pod \"crc-debug-g9qvm\" (UID: \"674b9517-da12-4329-90a3-5b07ca8ab3dd\") " pod="openshift-must-gather-fjdnn/crc-debug-g9qvm" Jan 21 14:02:31 crc kubenswrapper[4765]: I0121 14:02:31.296688 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v58s7\" (UniqueName: \"kubernetes.io/projected/674b9517-da12-4329-90a3-5b07ca8ab3dd-kube-api-access-v58s7\") pod \"crc-debug-g9qvm\" (UID: \"674b9517-da12-4329-90a3-5b07ca8ab3dd\") " pod="openshift-must-gather-fjdnn/crc-debug-g9qvm" Jan 21 14:02:31 crc kubenswrapper[4765]: I0121 14:02:31.296858 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/674b9517-da12-4329-90a3-5b07ca8ab3dd-host\") pod \"crc-debug-g9qvm\" (UID: \"674b9517-da12-4329-90a3-5b07ca8ab3dd\") " pod="openshift-must-gather-fjdnn/crc-debug-g9qvm" Jan 21 14:02:31 crc kubenswrapper[4765]: I0121 14:02:31.296950 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/674b9517-da12-4329-90a3-5b07ca8ab3dd-host\") pod \"crc-debug-g9qvm\" (UID: \"674b9517-da12-4329-90a3-5b07ca8ab3dd\") " pod="openshift-must-gather-fjdnn/crc-debug-g9qvm" Jan 21 14:02:31 crc kubenswrapper[4765]: I0121 14:02:31.313696 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v58s7\" (UniqueName: \"kubernetes.io/projected/674b9517-da12-4329-90a3-5b07ca8ab3dd-kube-api-access-v58s7\") pod \"crc-debug-g9qvm\" (UID: \"674b9517-da12-4329-90a3-5b07ca8ab3dd\") " pod="openshift-must-gather-fjdnn/crc-debug-g9qvm" Jan 21 14:02:31 crc kubenswrapper[4765]: I0121 14:02:31.431711 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e326a5ae5e0c6f03adb63d8bd1e6860f573107ef9d229fdae2ebf2e3fc18db2" Jan 21 14:02:31 crc kubenswrapper[4765]: I0121 14:02:31.432044 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fjdnn/crc-debug-vffqk" Jan 21 14:02:31 crc kubenswrapper[4765]: I0121 14:02:31.436373 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fjdnn/crc-debug-g9qvm" Jan 21 14:02:31 crc kubenswrapper[4765]: I0121 14:02:31.627435 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efbfb2-b731-4ca5-93bb-8d176ee222d1" path="/var/lib/kubelet/pods/09efbfb2-b731-4ca5-93bb-8d176ee222d1/volumes" Jan 21 14:02:32 crc kubenswrapper[4765]: I0121 14:02:32.440934 4765 generic.go:334] "Generic (PLEG): container finished" podID="674b9517-da12-4329-90a3-5b07ca8ab3dd" containerID="eeaba6abb37d3730ce8b3f9849871a023491a1f3fdadb940ab84f9456a7024f6" exitCode=0 Jan 21 14:02:32 crc kubenswrapper[4765]: I0121 14:02:32.441079 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fjdnn/crc-debug-g9qvm" event={"ID":"674b9517-da12-4329-90a3-5b07ca8ab3dd","Type":"ContainerDied","Data":"eeaba6abb37d3730ce8b3f9849871a023491a1f3fdadb940ab84f9456a7024f6"} Jan 21 14:02:32 crc kubenswrapper[4765]: I0121 14:02:32.441287 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fjdnn/crc-debug-g9qvm" event={"ID":"674b9517-da12-4329-90a3-5b07ca8ab3dd","Type":"ContainerStarted","Data":"7bc45b27ca46a15e7d92244b5ae2401d03ce9176f8a65240a121f9c50904490c"} Jan 21 14:02:32 crc kubenswrapper[4765]: I0121 14:02:32.507736 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fjdnn/crc-debug-g9qvm"] Jan 21 14:02:32 crc kubenswrapper[4765]: I0121 14:02:32.517143 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fjdnn/crc-debug-g9qvm"] Jan 21 14:02:33 crc kubenswrapper[4765]: I0121 14:02:33.103758 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-cssjm_30c79cf6-f62c-498b-8c0b-184d3eec661f/cert-manager-controller/0.log" Jan 21 14:02:33 crc kubenswrapper[4765]: I0121 14:02:33.119522 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-7gnzb_861d65e3-bec0-4a97-9ef1-2ff8d0c660fe/cert-manager-cainjector/0.log" Jan 21 14:02:33 crc kubenswrapper[4765]: I0121 14:02:33.129360 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-gznfw_34bef5eb-722e-4dd8-b19a-ae2ec67a4c93/cert-manager-webhook/0.log" Jan 21 14:02:33 crc kubenswrapper[4765]: I0121 14:02:33.552549 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fjdnn/crc-debug-g9qvm" Jan 21 14:02:33 crc kubenswrapper[4765]: I0121 14:02:33.642040 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/674b9517-da12-4329-90a3-5b07ca8ab3dd-host\") pod \"674b9517-da12-4329-90a3-5b07ca8ab3dd\" (UID: \"674b9517-da12-4329-90a3-5b07ca8ab3dd\") " Jan 21 14:02:33 crc kubenswrapper[4765]: I0121 14:02:33.642132 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/674b9517-da12-4329-90a3-5b07ca8ab3dd-host" (OuterVolumeSpecName: "host") pod "674b9517-da12-4329-90a3-5b07ca8ab3dd" (UID: "674b9517-da12-4329-90a3-5b07ca8ab3dd"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:02:33 crc kubenswrapper[4765]: I0121 14:02:33.642313 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v58s7\" (UniqueName: \"kubernetes.io/projected/674b9517-da12-4329-90a3-5b07ca8ab3dd-kube-api-access-v58s7\") pod \"674b9517-da12-4329-90a3-5b07ca8ab3dd\" (UID: \"674b9517-da12-4329-90a3-5b07ca8ab3dd\") " Jan 21 14:02:33 crc kubenswrapper[4765]: I0121 14:02:33.642675 4765 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/674b9517-da12-4329-90a3-5b07ca8ab3dd-host\") on node \"crc\" DevicePath \"\"" Jan 21 14:02:33 crc kubenswrapper[4765]: I0121 14:02:33.659434 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/674b9517-da12-4329-90a3-5b07ca8ab3dd-kube-api-access-v58s7" (OuterVolumeSpecName: "kube-api-access-v58s7") pod "674b9517-da12-4329-90a3-5b07ca8ab3dd" (UID: "674b9517-da12-4329-90a3-5b07ca8ab3dd"). InnerVolumeSpecName "kube-api-access-v58s7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:02:33 crc kubenswrapper[4765]: I0121 14:02:33.744180 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v58s7\" (UniqueName: \"kubernetes.io/projected/674b9517-da12-4329-90a3-5b07ca8ab3dd-kube-api-access-v58s7\") on node \"crc\" DevicePath \"\"" Jan 21 14:02:34 crc kubenswrapper[4765]: I0121 14:02:34.457933 4765 scope.go:117] "RemoveContainer" containerID="eeaba6abb37d3730ce8b3f9849871a023491a1f3fdadb940ab84f9456a7024f6" Jan 21 14:02:34 crc kubenswrapper[4765]: I0121 14:02:34.458130 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fjdnn/crc-debug-g9qvm" Jan 21 14:02:35 crc kubenswrapper[4765]: I0121 14:02:35.626776 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="674b9517-da12-4329-90a3-5b07ca8ab3dd" path="/var/lib/kubelet/pods/674b9517-da12-4329-90a3-5b07ca8ab3dd/volumes" Jan 21 14:02:39 crc kubenswrapper[4765]: I0121 14:02:39.373008 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-kgmtc_79ffb165-f80d-428c-a29e-998f1a119cd7/nmstate-console-plugin/0.log" Jan 21 14:02:39 crc kubenswrapper[4765]: I0121 14:02:39.425479 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lbjjz_0da8e178-dbab-4c9c-9e7a-503796386d6f/nmstate-handler/0.log" Jan 21 14:02:39 crc kubenswrapper[4765]: I0121 14:02:39.441401 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b2d62_7d962382-89ac-40cc-92b2-0bb0a8cecc4d/nmstate-metrics/0.log" Jan 21 14:02:39 crc kubenswrapper[4765]: I0121 14:02:39.448381 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b2d62_7d962382-89ac-40cc-92b2-0bb0a8cecc4d/kube-rbac-proxy/0.log" Jan 21 14:02:39 crc kubenswrapper[4765]: I0121 14:02:39.465675 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-fhpqb_26e746e8-47b5-4944-957d-5d43a89b207b/nmstate-operator/0.log" Jan 21 14:02:39 crc kubenswrapper[4765]: I0121 14:02:39.482487 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-lmj8n_a847c8c4-dd77-4cd8-9e06-5adb119c43fc/nmstate-webhook/0.log" Jan 21 14:02:47 crc kubenswrapper[4765]: I0121 14:02:47.189275 4765 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xlxjq"] Jan 21 14:02:47 crc kubenswrapper[4765]: E0121 14:02:47.190052 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674b9517-da12-4329-90a3-5b07ca8ab3dd" containerName="container-00" Jan 21 14:02:47 crc kubenswrapper[4765]: I0121 14:02:47.190072 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="674b9517-da12-4329-90a3-5b07ca8ab3dd" containerName="container-00" Jan 21 14:02:47 crc kubenswrapper[4765]: I0121 14:02:47.190298 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="674b9517-da12-4329-90a3-5b07ca8ab3dd" containerName="container-00" Jan 21 14:02:47 crc kubenswrapper[4765]: I0121 14:02:47.191623 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xlxjq" Jan 21 14:02:47 crc kubenswrapper[4765]: I0121 14:02:47.214886 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xlxjq"] Jan 21 14:02:47 crc kubenswrapper[4765]: I0121 14:02:47.385780 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2tgd\" (UniqueName: \"kubernetes.io/projected/dcd7d700-640c-4b38-8589-1c6ac6d01688-kube-api-access-j2tgd\") pod \"community-operators-xlxjq\" (UID: \"dcd7d700-640c-4b38-8589-1c6ac6d01688\") " pod="openshift-marketplace/community-operators-xlxjq" Jan 21 14:02:47 crc kubenswrapper[4765]: I0121 14:02:47.386054 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd7d700-640c-4b38-8589-1c6ac6d01688-catalog-content\") pod \"community-operators-xlxjq\" (UID: \"dcd7d700-640c-4b38-8589-1c6ac6d01688\") " pod="openshift-marketplace/community-operators-xlxjq" Jan 21 14:02:47 crc kubenswrapper[4765]: I0121 14:02:47.386150 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd7d700-640c-4b38-8589-1c6ac6d01688-utilities\") pod \"community-operators-xlxjq\" (UID: \"dcd7d700-640c-4b38-8589-1c6ac6d01688\") " pod="openshift-marketplace/community-operators-xlxjq" Jan 21 14:02:47 crc kubenswrapper[4765]: I0121 14:02:47.487836 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2tgd\" (UniqueName: \"kubernetes.io/projected/dcd7d700-640c-4b38-8589-1c6ac6d01688-kube-api-access-j2tgd\") pod \"community-operators-xlxjq\" (UID: \"dcd7d700-640c-4b38-8589-1c6ac6d01688\") " pod="openshift-marketplace/community-operators-xlxjq" Jan 21 14:02:47 crc kubenswrapper[4765]: I0121 14:02:47.488292 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd7d700-640c-4b38-8589-1c6ac6d01688-catalog-content\") pod \"community-operators-xlxjq\" (UID: \"dcd7d700-640c-4b38-8589-1c6ac6d01688\") " pod="openshift-marketplace/community-operators-xlxjq" Jan 21 14:02:47 crc kubenswrapper[4765]: I0121 14:02:47.488325 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd7d700-640c-4b38-8589-1c6ac6d01688-utilities\") pod \"community-operators-xlxjq\" (UID: \"dcd7d700-640c-4b38-8589-1c6ac6d01688\") " pod="openshift-marketplace/community-operators-xlxjq" Jan 21 14:02:47 crc kubenswrapper[4765]: I0121 
14:02:47.488693 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd7d700-640c-4b38-8589-1c6ac6d01688-catalog-content\") pod \"community-operators-xlxjq\" (UID: \"dcd7d700-640c-4b38-8589-1c6ac6d01688\") " pod="openshift-marketplace/community-operators-xlxjq" Jan 21 14:02:47 crc kubenswrapper[4765]: I0121 14:02:47.488821 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd7d700-640c-4b38-8589-1c6ac6d01688-utilities\") pod \"community-operators-xlxjq\" (UID: \"dcd7d700-640c-4b38-8589-1c6ac6d01688\") " pod="openshift-marketplace/community-operators-xlxjq" Jan 21 14:02:47 crc kubenswrapper[4765]: I0121 14:02:47.508441 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2tgd\" (UniqueName: \"kubernetes.io/projected/dcd7d700-640c-4b38-8589-1c6ac6d01688-kube-api-access-j2tgd\") pod \"community-operators-xlxjq\" (UID: \"dcd7d700-640c-4b38-8589-1c6ac6d01688\") " pod="openshift-marketplace/community-operators-xlxjq" Jan 21 14:02:47 crc kubenswrapper[4765]: I0121 14:02:47.515145 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xlxjq" Jan 21 14:02:48 crc kubenswrapper[4765]: I0121 14:02:48.119211 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xlxjq"] Jan 21 14:02:48 crc kubenswrapper[4765]: I0121 14:02:48.586736 4765 generic.go:334] "Generic (PLEG): container finished" podID="dcd7d700-640c-4b38-8589-1c6ac6d01688" containerID="225ffd83b8653d2f46c073a3373ed9e313031a76c41f0c0a81b8e4744f16b4eb" exitCode=0 Jan 21 14:02:48 crc kubenswrapper[4765]: I0121 14:02:48.586788 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlxjq" event={"ID":"dcd7d700-640c-4b38-8589-1c6ac6d01688","Type":"ContainerDied","Data":"225ffd83b8653d2f46c073a3373ed9e313031a76c41f0c0a81b8e4744f16b4eb"} Jan 21 14:02:48 crc kubenswrapper[4765]: I0121 14:02:48.587895 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlxjq" event={"ID":"dcd7d700-640c-4b38-8589-1c6ac6d01688","Type":"ContainerStarted","Data":"cec7134a210e6395d47fc8d315c2f7ba34da4999162b9bbd31566f1dc1b5271a"} Jan 21 14:02:49 crc kubenswrapper[4765]: I0121 14:02:49.599228 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlxjq" event={"ID":"dcd7d700-640c-4b38-8589-1c6ac6d01688","Type":"ContainerStarted","Data":"0a257b5973c5de28e06830abb5de2df52a41e0cd33a3c662e9d01d045fc4fb94"} Jan 21 14:02:50 crc kubenswrapper[4765]: I0121 14:02:50.610994 4765 generic.go:334] "Generic (PLEG): container finished" podID="dcd7d700-640c-4b38-8589-1c6ac6d01688" containerID="0a257b5973c5de28e06830abb5de2df52a41e0cd33a3c662e9d01d045fc4fb94" exitCode=0 Jan 21 14:02:50 crc kubenswrapper[4765]: I0121 14:02:50.611042 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlxjq" event={"ID":"dcd7d700-640c-4b38-8589-1c6ac6d01688","Type":"ContainerDied","Data":"0a257b5973c5de28e06830abb5de2df52a41e0cd33a3c662e9d01d045fc4fb94"} Jan 21 14:02:51 crc kubenswrapper[4765]: I0121 14:02:51.623332 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlxjq" 
event={"ID":"dcd7d700-640c-4b38-8589-1c6ac6d01688","Type":"ContainerStarted","Data":"82214c01dba06fa223fc7a334dce886b3236ebe157b624f7fa6a903bfcb0b1e1"} Jan 21 14:02:51 crc kubenswrapper[4765]: I0121 14:02:51.652852 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xlxjq" podStartSLOduration=1.859188528 podStartE2EDuration="4.65282639s" podCreationTimestamp="2026-01-21 14:02:47 +0000 UTC" firstStartedPulling="2026-01-21 14:02:48.588587816 +0000 UTC m=+3629.606313638" lastFinishedPulling="2026-01-21 14:02:51.382225668 +0000 UTC m=+3632.399951500" observedRunningTime="2026-01-21 14:02:51.648403863 +0000 UTC m=+3632.666129725" watchObservedRunningTime="2026-01-21 14:02:51.65282639 +0000 UTC m=+3632.670552232" Jan 21 14:02:53 crc kubenswrapper[4765]: I0121 14:02:53.225280 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skh9c_f05e7811-d30d-4f00-b816-a740a454c635/controller/0.log" Jan 21 14:02:53 crc kubenswrapper[4765]: I0121 14:02:53.233425 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skh9c_f05e7811-d30d-4f00-b816-a740a454c635/kube-rbac-proxy/0.log" Jan 21 14:02:53 crc kubenswrapper[4765]: I0121 14:02:53.248798 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-qlhwh_af902f5f-216b-41c7-b1e9-56953151dd65/frr-k8s-webhook-server/0.log" Jan 21 14:02:53 crc kubenswrapper[4765]: I0121 14:02:53.288362 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/controller/0.log" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.577819 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fmgwk"] Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.583952 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fmgwk" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.597781 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fmgwk"] Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.692025 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/frr/0.log" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.698773 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/reloader/0.log" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.709786 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/frr-metrics/0.log" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.726592 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/kube-rbac-proxy/0.log" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.732047 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c8df9ad-a209-42ac-87fb-44a39bf09e47-catalog-content\") pod \"certified-operators-fmgwk\" (UID: \"7c8df9ad-a209-42ac-87fb-44a39bf09e47\") " pod="openshift-marketplace/certified-operators-fmgwk" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.732334 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c8df9ad-a209-42ac-87fb-44a39bf09e47-utilities\") pod \"certified-operators-fmgwk\" (UID: \"7c8df9ad-a209-42ac-87fb-44a39bf09e47\") " pod="openshift-marketplace/certified-operators-fmgwk" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.732506 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4nd9\" (UniqueName: \"kubernetes.io/projected/7c8df9ad-a209-42ac-87fb-44a39bf09e47-kube-api-access-d4nd9\") pod \"certified-operators-fmgwk\" (UID: \"7c8df9ad-a209-42ac-87fb-44a39bf09e47\") " pod="openshift-marketplace/certified-operators-fmgwk" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.736884 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/kube-rbac-proxy-frr/0.log" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.750287 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/cp-frr-files/0.log" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.764148 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/cp-reloader/0.log" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.774778 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/cp-metrics/0.log" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.799860 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c66566bf6-ls8r8_57ed60d8-a38f-47ba-b66d-6e7e557b4399/manager/0.log" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.812138 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-77844fbdcc-cgv2c_7ba871a2-babc-4cc6-a13b-4fa78e3d0580/webhook-server/0.log" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.835950 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c8df9ad-a209-42ac-87fb-44a39bf09e47-catalog-content\") pod \"certified-operators-fmgwk\" (UID: \"7c8df9ad-a209-42ac-87fb-44a39bf09e47\") " pod="openshift-marketplace/certified-operators-fmgwk" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.836040 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c8df9ad-a209-42ac-87fb-44a39bf09e47-utilities\") pod \"certified-operators-fmgwk\" (UID: \"7c8df9ad-a209-42ac-87fb-44a39bf09e47\") " pod="openshift-marketplace/certified-operators-fmgwk" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.836089 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4nd9\" (UniqueName: \"kubernetes.io/projected/7c8df9ad-a209-42ac-87fb-44a39bf09e47-kube-api-access-d4nd9\") pod \"certified-operators-fmgwk\" (UID: \"7c8df9ad-a209-42ac-87fb-44a39bf09e47\") " pod="openshift-marketplace/certified-operators-fmgwk" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.836901 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c8df9ad-a209-42ac-87fb-44a39bf09e47-catalog-content\") pod \"certified-operators-fmgwk\" (UID: \"7c8df9ad-a209-42ac-87fb-44a39bf09e47\") " pod="openshift-marketplace/certified-operators-fmgwk" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.837323 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c8df9ad-a209-42ac-87fb-44a39bf09e47-utilities\") pod \"certified-operators-fmgwk\" (UID: \"7c8df9ad-a209-42ac-87fb-44a39bf09e47\") " pod="openshift-marketplace/certified-operators-fmgwk" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.878384 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4nd9\" (UniqueName: \"kubernetes.io/projected/7c8df9ad-a209-42ac-87fb-44a39bf09e47-kube-api-access-d4nd9\") pod \"certified-operators-fmgwk\" (UID: \"7c8df9ad-a209-42ac-87fb-44a39bf09e47\") " pod="openshift-marketplace/certified-operators-fmgwk" Jan 21 14:02:54 crc kubenswrapper[4765]: I0121 14:02:54.934888 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fmgwk" Jan 21 14:02:55 crc kubenswrapper[4765]: I0121 14:02:55.378342 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vswxq_8f59aeb8-b8fe-44bc-9e55-94eba06a676b/speaker/0.log" Jan 21 14:02:55 crc kubenswrapper[4765]: I0121 14:02:55.407546 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vswxq_8f59aeb8-b8fe-44bc-9e55-94eba06a676b/kube-rbac-proxy/0.log" Jan 21 14:02:55 crc kubenswrapper[4765]: W0121 14:02:55.539768 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c8df9ad_a209_42ac_87fb_44a39bf09e47.slice/crio-41d5a39e85f89df9ec1b151299dcc987179b0feba97bfd151a499720c1c26f99 WatchSource:0}: Error finding container 41d5a39e85f89df9ec1b151299dcc987179b0feba97bfd151a499720c1c26f99: Status 404 returned error can't find the container with id 41d5a39e85f89df9ec1b151299dcc987179b0feba97bfd151a499720c1c26f99 Jan 21 14:02:55 crc kubenswrapper[4765]: I0121 14:02:55.540626 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fmgwk"] Jan 21 14:02:55 crc kubenswrapper[4765]: I0121 14:02:55.656358 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmgwk" event={"ID":"7c8df9ad-a209-42ac-87fb-44a39bf09e47","Type":"ContainerStarted","Data":"41d5a39e85f89df9ec1b151299dcc987179b0feba97bfd151a499720c1c26f99"} Jan 21 14:02:56 crc kubenswrapper[4765]: I0121 14:02:56.676502 4765 generic.go:334] "Generic (PLEG): container finished" podID="7c8df9ad-a209-42ac-87fb-44a39bf09e47" containerID="0d690cefa16a2e4fa0ac81e37b2f92e3c9a334304d59e2c23e7e176e5aabb17a" exitCode=0 Jan 21 14:02:56 crc kubenswrapper[4765]: I0121 14:02:56.676649 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmgwk" event={"ID":"7c8df9ad-a209-42ac-87fb-44a39bf09e47","Type":"ContainerDied","Data":"0d690cefa16a2e4fa0ac81e37b2f92e3c9a334304d59e2c23e7e176e5aabb17a"} Jan 21 14:02:57 crc kubenswrapper[4765]: I0121 14:02:57.515425 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xlxjq" Jan 21 14:02:57 crc kubenswrapper[4765]: I0121 14:02:57.515799 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xlxjq" Jan 21 14:02:57 crc kubenswrapper[4765]: I0121 14:02:57.563340 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xlxjq" Jan 21 14:02:57 crc kubenswrapper[4765]: I0121 14:02:57.691671 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmgwk" event={"ID":"7c8df9ad-a209-42ac-87fb-44a39bf09e47","Type":"ContainerStarted","Data":"b83bc0581f019c828ebf11a902b73e3d9da843286a50451185c97aef417a30dc"} Jan 21 14:02:57 crc kubenswrapper[4765]: I0121 14:02:57.746234 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xlxjq" Jan 21 14:02:58 crc kubenswrapper[4765]: I0121 14:02:58.698460 4765 generic.go:334] "Generic (PLEG): container finished" podID="7c8df9ad-a209-42ac-87fb-44a39bf09e47" containerID="b83bc0581f019c828ebf11a902b73e3d9da843286a50451185c97aef417a30dc" exitCode=0 Jan 21 14:02:58 crc kubenswrapper[4765]: I0121 14:02:58.698550 4765 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmgwk" event={"ID":"7c8df9ad-a209-42ac-87fb-44a39bf09e47","Type":"ContainerDied","Data":"b83bc0581f019c828ebf11a902b73e3d9da843286a50451185c97aef417a30dc"} Jan 21 14:02:59 crc kubenswrapper[4765]: I0121 14:02:59.709218 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmgwk" event={"ID":"7c8df9ad-a209-42ac-87fb-44a39bf09e47","Type":"ContainerStarted","Data":"7f29818bfff0ab6ae56089c45ade6fe07a0ec7bd0002a8cbd39df050a09aa645"} Jan 21 14:02:59 crc kubenswrapper[4765]: I0121 14:02:59.745232 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fmgwk" podStartSLOduration=3.2356679760000002 podStartE2EDuration="5.745200138s" podCreationTimestamp="2026-01-21 14:02:54 +0000 UTC" firstStartedPulling="2026-01-21 14:02:56.683797785 +0000 UTC m=+3637.701523607" lastFinishedPulling="2026-01-21 14:02:59.193329937 +0000 UTC m=+3640.211055769" observedRunningTime="2026-01-21 14:02:59.737668562 +0000 UTC m=+3640.755394384" watchObservedRunningTime="2026-01-21 14:02:59.745200138 +0000 UTC m=+3640.762925960" Jan 21 14:02:59 crc kubenswrapper[4765]: I0121 14:02:59.768434 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xlxjq"] Jan 21 14:02:59 crc kubenswrapper[4765]: I0121 14:02:59.768855 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xlxjq" podUID="dcd7d700-640c-4b38-8589-1c6ac6d01688" containerName="registry-server" containerID="cri-o://82214c01dba06fa223fc7a334dce886b3236ebe157b624f7fa6a903bfcb0b1e1" gracePeriod=2 Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.025801 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w_d73b65cf-eba0-49dd-81ad-0fb0431092b8/extract/0.log" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.035196 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w_d73b65cf-eba0-49dd-81ad-0fb0431092b8/util/0.log" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.048521 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w_d73b65cf-eba0-49dd-81ad-0fb0431092b8/pull/0.log" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.066576 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22_68e5ceb6-2341-4976-8588-ecdd97e94b29/extract/0.log" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.084727 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22_68e5ceb6-2341-4976-8588-ecdd97e94b29/util/0.log" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.102982 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22_68e5ceb6-2341-4976-8588-ecdd97e94b29/pull/0.log" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.132053 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fmgwk_7c8df9ad-a209-42ac-87fb-44a39bf09e47/registry-server/0.log" Jan 
21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.141767 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fmgwk_7c8df9ad-a209-42ac-87fb-44a39bf09e47/extract-utilities/0.log" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.152492 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fmgwk_7c8df9ad-a209-42ac-87fb-44a39bf09e47/extract-content/0.log" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.235378 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xlxjq" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.371947 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd7d700-640c-4b38-8589-1c6ac6d01688-utilities\") pod \"dcd7d700-640c-4b38-8589-1c6ac6d01688\" (UID: \"dcd7d700-640c-4b38-8589-1c6ac6d01688\") " Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.372054 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd7d700-640c-4b38-8589-1c6ac6d01688-catalog-content\") pod \"dcd7d700-640c-4b38-8589-1c6ac6d01688\" (UID: \"dcd7d700-640c-4b38-8589-1c6ac6d01688\") " Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.372084 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2tgd\" (UniqueName: \"kubernetes.io/projected/dcd7d700-640c-4b38-8589-1c6ac6d01688-kube-api-access-j2tgd\") pod \"dcd7d700-640c-4b38-8589-1c6ac6d01688\" (UID: \"dcd7d700-640c-4b38-8589-1c6ac6d01688\") " Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.372805 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcd7d700-640c-4b38-8589-1c6ac6d01688-utilities" (OuterVolumeSpecName: "utilities") pod "dcd7d700-640c-4b38-8589-1c6ac6d01688" (UID: "dcd7d700-640c-4b38-8589-1c6ac6d01688"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.381099 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd7d700-640c-4b38-8589-1c6ac6d01688-kube-api-access-j2tgd" (OuterVolumeSpecName: "kube-api-access-j2tgd") pod "dcd7d700-640c-4b38-8589-1c6ac6d01688" (UID: "dcd7d700-640c-4b38-8589-1c6ac6d01688"). InnerVolumeSpecName "kube-api-access-j2tgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.451675 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcd7d700-640c-4b38-8589-1c6ac6d01688-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dcd7d700-640c-4b38-8589-1c6ac6d01688" (UID: "dcd7d700-640c-4b38-8589-1c6ac6d01688"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.475031 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd7d700-640c-4b38-8589-1c6ac6d01688-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.475086 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2tgd\" (UniqueName: \"kubernetes.io/projected/dcd7d700-640c-4b38-8589-1c6ac6d01688-kube-api-access-j2tgd\") on node \"crc\" DevicePath \"\"" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.475117 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd7d700-640c-4b38-8589-1c6ac6d01688-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.597092 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l5vxr_1290053f-ebc1-4a58-963a-333751e51945/registry-server/0.log" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.605769 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l5vxr_1290053f-ebc1-4a58-963a-333751e51945/extract-utilities/0.log" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.624511 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l5vxr_1290053f-ebc1-4a58-963a-333751e51945/extract-content/0.log" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.723434 4765 generic.go:334] "Generic (PLEG): container finished" podID="dcd7d700-640c-4b38-8589-1c6ac6d01688" containerID="82214c01dba06fa223fc7a334dce886b3236ebe157b624f7fa6a903bfcb0b1e1" exitCode=0 Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.723511 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xlxjq" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.723569 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlxjq" event={"ID":"dcd7d700-640c-4b38-8589-1c6ac6d01688","Type":"ContainerDied","Data":"82214c01dba06fa223fc7a334dce886b3236ebe157b624f7fa6a903bfcb0b1e1"} Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.723625 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlxjq" event={"ID":"dcd7d700-640c-4b38-8589-1c6ac6d01688","Type":"ContainerDied","Data":"cec7134a210e6395d47fc8d315c2f7ba34da4999162b9bbd31566f1dc1b5271a"} Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.723647 4765 scope.go:117] "RemoveContainer" containerID="82214c01dba06fa223fc7a334dce886b3236ebe157b624f7fa6a903bfcb0b1e1" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.754462 4765 scope.go:117] "RemoveContainer" containerID="0a257b5973c5de28e06830abb5de2df52a41e0cd33a3c662e9d01d045fc4fb94" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.800257 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xlxjq"] Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.822921 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xlxjq"] Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.830434 4765 scope.go:117] "RemoveContainer" containerID="225ffd83b8653d2f46c073a3373ed9e313031a76c41f0c0a81b8e4744f16b4eb" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.895082 4765 scope.go:117] "RemoveContainer" containerID="82214c01dba06fa223fc7a334dce886b3236ebe157b624f7fa6a903bfcb0b1e1" Jan 21 14:03:00 crc kubenswrapper[4765]: E0121 14:03:00.897331 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82214c01dba06fa223fc7a334dce886b3236ebe157b624f7fa6a903bfcb0b1e1\": container with ID starting with 82214c01dba06fa223fc7a334dce886b3236ebe157b624f7fa6a903bfcb0b1e1 not found: ID does not exist" containerID="82214c01dba06fa223fc7a334dce886b3236ebe157b624f7fa6a903bfcb0b1e1" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.897479 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82214c01dba06fa223fc7a334dce886b3236ebe157b624f7fa6a903bfcb0b1e1"} err="failed to get container status \"82214c01dba06fa223fc7a334dce886b3236ebe157b624f7fa6a903bfcb0b1e1\": rpc error: code = NotFound desc = could not find container \"82214c01dba06fa223fc7a334dce886b3236ebe157b624f7fa6a903bfcb0b1e1\": container with ID starting with 82214c01dba06fa223fc7a334dce886b3236ebe157b624f7fa6a903bfcb0b1e1 not found: ID does not exist" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.897612 4765 scope.go:117] "RemoveContainer" containerID="0a257b5973c5de28e06830abb5de2df52a41e0cd33a3c662e9d01d045fc4fb94" Jan 21 14:03:00 crc kubenswrapper[4765]: E0121 14:03:00.898718 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a257b5973c5de28e06830abb5de2df52a41e0cd33a3c662e9d01d045fc4fb94\": container with ID starting with 0a257b5973c5de28e06830abb5de2df52a41e0cd33a3c662e9d01d045fc4fb94 not found: ID does not exist" containerID="0a257b5973c5de28e06830abb5de2df52a41e0cd33a3c662e9d01d045fc4fb94" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.898751 4765 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a257b5973c5de28e06830abb5de2df52a41e0cd33a3c662e9d01d045fc4fb94"} err="failed to get container status \"0a257b5973c5de28e06830abb5de2df52a41e0cd33a3c662e9d01d045fc4fb94\": rpc error: code = NotFound desc = could not find container \"0a257b5973c5de28e06830abb5de2df52a41e0cd33a3c662e9d01d045fc4fb94\": container with ID starting with 0a257b5973c5de28e06830abb5de2df52a41e0cd33a3c662e9d01d045fc4fb94 not found: ID does not exist" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.898773 4765 scope.go:117] "RemoveContainer" containerID="225ffd83b8653d2f46c073a3373ed9e313031a76c41f0c0a81b8e4744f16b4eb" Jan 21 14:03:00 crc kubenswrapper[4765]: E0121 14:03:00.899033 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"225ffd83b8653d2f46c073a3373ed9e313031a76c41f0c0a81b8e4744f16b4eb\": container with ID starting with 225ffd83b8653d2f46c073a3373ed9e313031a76c41f0c0a81b8e4744f16b4eb not found: ID does not exist" containerID="225ffd83b8653d2f46c073a3373ed9e313031a76c41f0c0a81b8e4744f16b4eb" Jan 21 14:03:00 crc kubenswrapper[4765]: I0121 14:03:00.899162 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"225ffd83b8653d2f46c073a3373ed9e313031a76c41f0c0a81b8e4744f16b4eb"} err="failed to get container status \"225ffd83b8653d2f46c073a3373ed9e313031a76c41f0c0a81b8e4744f16b4eb\": rpc error: code = NotFound desc = could not find container \"225ffd83b8653d2f46c073a3373ed9e313031a76c41f0c0a81b8e4744f16b4eb\": container with ID starting with 225ffd83b8653d2f46c073a3373ed9e313031a76c41f0c0a81b8e4744f16b4eb not found: ID does not exist" Jan 21 14:03:01 crc kubenswrapper[4765]: I0121 14:03:01.180810 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fskff_f231dd53-72c3-4d70-879f-d840f959c6c6/registry-server/0.log" Jan 21 14:03:01 crc kubenswrapper[4765]: I0121 14:03:01.186064 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fskff_f231dd53-72c3-4d70-879f-d840f959c6c6/extract-utilities/0.log" Jan 21 14:03:01 crc kubenswrapper[4765]: I0121 14:03:01.195351 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fskff_f231dd53-72c3-4d70-879f-d840f959c6c6/extract-content/0.log" Jan 21 14:03:01 crc kubenswrapper[4765]: I0121 14:03:01.232298 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7bhqm_ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6/marketplace-operator/0.log" Jan 21 14:03:01 crc kubenswrapper[4765]: I0121 14:03:01.484452 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n2lvd_fd0d39b7-d9c4-4e89-a696-163f5f23eb76/registry-server/0.log" Jan 21 14:03:01 crc kubenswrapper[4765]: I0121 14:03:01.490289 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n2lvd_fd0d39b7-d9c4-4e89-a696-163f5f23eb76/extract-utilities/0.log" Jan 21 14:03:01 crc kubenswrapper[4765]: I0121 14:03:01.517089 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n2lvd_fd0d39b7-d9c4-4e89-a696-163f5f23eb76/extract-content/0.log" Jan 21 14:03:01 crc kubenswrapper[4765]: I0121 14:03:01.627580 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="dcd7d700-640c-4b38-8589-1c6ac6d01688" path="/var/lib/kubelet/pods/dcd7d700-640c-4b38-8589-1c6ac6d01688/volumes" Jan 21 14:03:02 crc kubenswrapper[4765]: I0121 14:03:02.212022 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-54n7h_807f8e51-3f5b-4702-be3f-7fe335b54522/registry-server/0.log" Jan 21 14:03:02 crc kubenswrapper[4765]: I0121 14:03:02.217847 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-54n7h_807f8e51-3f5b-4702-be3f-7fe335b54522/extract-utilities/0.log" Jan 21 14:03:02 crc kubenswrapper[4765]: I0121 14:03:02.229794 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-54n7h_807f8e51-3f5b-4702-be3f-7fe335b54522/extract-content/0.log" Jan 21 14:03:04 crc kubenswrapper[4765]: I0121 14:03:04.935994 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fmgwk" Jan 21 14:03:04 crc kubenswrapper[4765]: I0121 14:03:04.937738 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fmgwk" Jan 21 14:03:04 crc kubenswrapper[4765]: I0121 14:03:04.986403 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fmgwk" Jan 21 14:03:05 crc kubenswrapper[4765]: I0121 14:03:05.822023 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fmgwk" Jan 21 14:03:05 crc kubenswrapper[4765]: I0121 14:03:05.963645 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fmgwk"] Jan 21 14:03:07 crc kubenswrapper[4765]: I0121 14:03:07.789526 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fmgwk" podUID="7c8df9ad-a209-42ac-87fb-44a39bf09e47" containerName="registry-server" containerID="cri-o://7f29818bfff0ab6ae56089c45ade6fe07a0ec7bd0002a8cbd39df050a09aa645" gracePeriod=2 Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.336156 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fmgwk" Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.415991 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c8df9ad-a209-42ac-87fb-44a39bf09e47-utilities\") pod \"7c8df9ad-a209-42ac-87fb-44a39bf09e47\" (UID: \"7c8df9ad-a209-42ac-87fb-44a39bf09e47\") " Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.416395 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4nd9\" (UniqueName: \"kubernetes.io/projected/7c8df9ad-a209-42ac-87fb-44a39bf09e47-kube-api-access-d4nd9\") pod \"7c8df9ad-a209-42ac-87fb-44a39bf09e47\" (UID: \"7c8df9ad-a209-42ac-87fb-44a39bf09e47\") " Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.416484 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c8df9ad-a209-42ac-87fb-44a39bf09e47-catalog-content\") pod \"7c8df9ad-a209-42ac-87fb-44a39bf09e47\" (UID: \"7c8df9ad-a209-42ac-87fb-44a39bf09e47\") " Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.416775 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c8df9ad-a209-42ac-87fb-44a39bf09e47-utilities" (OuterVolumeSpecName: "utilities") pod "7c8df9ad-a209-42ac-87fb-44a39bf09e47" (UID: "7c8df9ad-a209-42ac-87fb-44a39bf09e47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.417204 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c8df9ad-a209-42ac-87fb-44a39bf09e47-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.430149 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c8df9ad-a209-42ac-87fb-44a39bf09e47-kube-api-access-d4nd9" (OuterVolumeSpecName: "kube-api-access-d4nd9") pod "7c8df9ad-a209-42ac-87fb-44a39bf09e47" (UID: "7c8df9ad-a209-42ac-87fb-44a39bf09e47"). InnerVolumeSpecName "kube-api-access-d4nd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.464868 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c8df9ad-a209-42ac-87fb-44a39bf09e47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c8df9ad-a209-42ac-87fb-44a39bf09e47" (UID: "7c8df9ad-a209-42ac-87fb-44a39bf09e47"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.519314 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c8df9ad-a209-42ac-87fb-44a39bf09e47-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.519354 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4nd9\" (UniqueName: \"kubernetes.io/projected/7c8df9ad-a209-42ac-87fb-44a39bf09e47-kube-api-access-d4nd9\") on node \"crc\" DevicePath \"\"" Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.799125 4765 generic.go:334] "Generic (PLEG): container finished" podID="7c8df9ad-a209-42ac-87fb-44a39bf09e47" containerID="7f29818bfff0ab6ae56089c45ade6fe07a0ec7bd0002a8cbd39df050a09aa645" exitCode=0 Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.799165 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmgwk" event={"ID":"7c8df9ad-a209-42ac-87fb-44a39bf09e47","Type":"ContainerDied","Data":"7f29818bfff0ab6ae56089c45ade6fe07a0ec7bd0002a8cbd39df050a09aa645"} Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.799194 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmgwk" event={"ID":"7c8df9ad-a209-42ac-87fb-44a39bf09e47","Type":"ContainerDied","Data":"41d5a39e85f89df9ec1b151299dcc987179b0feba97bfd151a499720c1c26f99"} Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.799201 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fmgwk" Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.799228 4765 scope.go:117] "RemoveContainer" containerID="7f29818bfff0ab6ae56089c45ade6fe07a0ec7bd0002a8cbd39df050a09aa645" Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.819540 4765 scope.go:117] "RemoveContainer" containerID="b83bc0581f019c828ebf11a902b73e3d9da843286a50451185c97aef417a30dc" Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.860334 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fmgwk"] Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.863846 4765 scope.go:117] "RemoveContainer" containerID="0d690cefa16a2e4fa0ac81e37b2f92e3c9a334304d59e2c23e7e176e5aabb17a" Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.872079 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fmgwk"] Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.916748 4765 scope.go:117] "RemoveContainer" containerID="7f29818bfff0ab6ae56089c45ade6fe07a0ec7bd0002a8cbd39df050a09aa645" Jan 21 14:03:08 crc kubenswrapper[4765]: E0121 14:03:08.917164 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f29818bfff0ab6ae56089c45ade6fe07a0ec7bd0002a8cbd39df050a09aa645\": container with ID starting with 7f29818bfff0ab6ae56089c45ade6fe07a0ec7bd0002a8cbd39df050a09aa645 not found: ID does not exist" containerID="7f29818bfff0ab6ae56089c45ade6fe07a0ec7bd0002a8cbd39df050a09aa645" Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.917201 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f29818bfff0ab6ae56089c45ade6fe07a0ec7bd0002a8cbd39df050a09aa645"} err="failed to get container status 
\"7f29818bfff0ab6ae56089c45ade6fe07a0ec7bd0002a8cbd39df050a09aa645\": rpc error: code = NotFound desc = could not find container \"7f29818bfff0ab6ae56089c45ade6fe07a0ec7bd0002a8cbd39df050a09aa645\": container with ID starting with 7f29818bfff0ab6ae56089c45ade6fe07a0ec7bd0002a8cbd39df050a09aa645 not found: ID does not exist" Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.917245 4765 scope.go:117] "RemoveContainer" containerID="b83bc0581f019c828ebf11a902b73e3d9da843286a50451185c97aef417a30dc" Jan 21 14:03:08 crc kubenswrapper[4765]: E0121 14:03:08.918706 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b83bc0581f019c828ebf11a902b73e3d9da843286a50451185c97aef417a30dc\": container with ID starting with b83bc0581f019c828ebf11a902b73e3d9da843286a50451185c97aef417a30dc not found: ID does not exist" containerID="b83bc0581f019c828ebf11a902b73e3d9da843286a50451185c97aef417a30dc" Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.918748 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b83bc0581f019c828ebf11a902b73e3d9da843286a50451185c97aef417a30dc"} err="failed to get container status \"b83bc0581f019c828ebf11a902b73e3d9da843286a50451185c97aef417a30dc\": rpc error: code = NotFound desc = could not find container \"b83bc0581f019c828ebf11a902b73e3d9da843286a50451185c97aef417a30dc\": container with ID starting with b83bc0581f019c828ebf11a902b73e3d9da843286a50451185c97aef417a30dc not found: ID does not exist" Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.918776 4765 scope.go:117] "RemoveContainer" containerID="0d690cefa16a2e4fa0ac81e37b2f92e3c9a334304d59e2c23e7e176e5aabb17a" Jan 21 14:03:08 crc kubenswrapper[4765]: E0121 14:03:08.919131 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d690cefa16a2e4fa0ac81e37b2f92e3c9a334304d59e2c23e7e176e5aabb17a\": container with ID starting with 0d690cefa16a2e4fa0ac81e37b2f92e3c9a334304d59e2c23e7e176e5aabb17a not found: ID does not exist" containerID="0d690cefa16a2e4fa0ac81e37b2f92e3c9a334304d59e2c23e7e176e5aabb17a" Jan 21 14:03:08 crc kubenswrapper[4765]: I0121 14:03:08.919156 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d690cefa16a2e4fa0ac81e37b2f92e3c9a334304d59e2c23e7e176e5aabb17a"} err="failed to get container status \"0d690cefa16a2e4fa0ac81e37b2f92e3c9a334304d59e2c23e7e176e5aabb17a\": rpc error: code = NotFound desc = could not find container \"0d690cefa16a2e4fa0ac81e37b2f92e3c9a334304d59e2c23e7e176e5aabb17a\": container with ID starting with 0d690cefa16a2e4fa0ac81e37b2f92e3c9a334304d59e2c23e7e176e5aabb17a not found: ID does not exist" Jan 21 14:03:09 crc kubenswrapper[4765]: I0121 14:03:09.630230 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c8df9ad-a209-42ac-87fb-44a39bf09e47" path="/var/lib/kubelet/pods/7c8df9ad-a209-42ac-87fb-44a39bf09e47/volumes" Jan 21 14:03:14 crc kubenswrapper[4765]: I0121 14:03:14.446255 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:03:14 crc kubenswrapper[4765]: I0121 14:03:14.447621 4765 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:03:44 crc kubenswrapper[4765]: I0121 14:03:44.445590 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:03:44 crc kubenswrapper[4765]: I0121 14:03:44.447109 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:04:14 crc kubenswrapper[4765]: I0121 14:04:14.445852 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:04:14 crc kubenswrapper[4765]: I0121 14:04:14.446457 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:04:14 crc kubenswrapper[4765]: I0121 14:04:14.446508 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 14:04:14 crc kubenswrapper[4765]: I0121 14:04:14.447325 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"} pod="openshift-machine-config-operator/machine-config-daemon-v72nq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:04:14 crc kubenswrapper[4765]: I0121 14:04:14.447371 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" containerID="cri-o://5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60" gracePeriod=600 Jan 21 14:04:14 crc kubenswrapper[4765]: E0121 14:04:14.569290 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:04:15 crc kubenswrapper[4765]: I0121 14:04:15.504173 4765 generic.go:334] "Generic (PLEG): container finished" podID="e149390c-e4da-4dfd-bed2-b14de058f921" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60" exitCode=0 Jan 
21 14:04:15 crc kubenswrapper[4765]: I0121 14:04:15.504460 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerDied","Data":"5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"} Jan 21 14:04:15 crc kubenswrapper[4765]: I0121 14:04:15.504502 4765 scope.go:117] "RemoveContainer" containerID="123c2df4d0b298a94771f8fe32d86827f1ad185563334945bac4e807eabfc67b" Jan 21 14:04:15 crc kubenswrapper[4765]: I0121 14:04:15.505107 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60" Jan 21 14:04:15 crc kubenswrapper[4765]: E0121 14:04:15.505347 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:04:24 crc kubenswrapper[4765]: I0121 14:04:24.488447 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skh9c_f05e7811-d30d-4f00-b816-a740a454c635/controller/0.log" Jan 21 14:04:24 crc kubenswrapper[4765]: I0121 14:04:24.495710 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skh9c_f05e7811-d30d-4f00-b816-a740a454c635/kube-rbac-proxy/0.log" Jan 21 14:04:24 crc kubenswrapper[4765]: I0121 14:04:24.509121 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-qlhwh_af902f5f-216b-41c7-b1e9-56953151dd65/frr-k8s-webhook-server/0.log" Jan 21 14:04:24 crc kubenswrapper[4765]: I0121 14:04:24.528043 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/controller/0.log" Jan 21 14:04:24 crc kubenswrapper[4765]: I0121 14:04:24.637535 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-cssjm_30c79cf6-f62c-498b-8c0b-184d3eec661f/cert-manager-controller/0.log" Jan 21 14:04:24 crc kubenswrapper[4765]: I0121 14:04:24.654317 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-7gnzb_861d65e3-bec0-4a97-9ef1-2ff8d0c660fe/cert-manager-cainjector/0.log" Jan 21 14:04:24 crc kubenswrapper[4765]: I0121 14:04:24.670692 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-gznfw_34bef5eb-722e-4dd8-b19a-ae2ec67a4c93/cert-manager-webhook/0.log" Jan 21 14:04:25 crc kubenswrapper[4765]: I0121 14:04:25.933839 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/frr/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.002193 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m_20b31ee6-0264-4ffb-b43c-abbea443e89e/extract/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.014328 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m_20b31ee6-0264-4ffb-b43c-abbea443e89e/util/0.log" 
Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.022619 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m_20b31ee6-0264-4ffb-b43c-abbea443e89e/pull/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.125544 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-848df65fbb-79lv9_448c57b9-0176-42e1-a493-609bc853db01/manager/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.382966 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-kq85p_cd5b6743-7a2a-4d03-8adc-952fb87e6f02/manager/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.392814 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-dgbtx_079ac5a2-3654-48e8-8bf0-597018fc2ca5/manager/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.481649 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-65hfk_4c92e105-ba8b-4828-bc30-857c5431672f/manager/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.488578 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/reloader/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.494357 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/frr-metrics/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.496350 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-8pvpr_ab7eaa76-7a22-4d3c-85a3-9b643832d707/manager/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.501290 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/kube-rbac-proxy/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.509777 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/kube-rbac-proxy-frr/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.518938 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/cp-frr-files/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.522125 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-t42c2_00c36135-159f-43be-be7c-b4f01cf2ace7/manager/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.525793 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/cp-reloader/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.540014 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/cp-metrics/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.577905 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c66566bf6-ls8r8_57ed60d8-a38f-47ba-b66d-6e7e557b4399/manager/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 
14:04:26.594878 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-77844fbdcc-cgv2c_7ba871a2-babc-4cc6-a13b-4fa78e3d0580/webhook-server/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.619753 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60" Jan 21 14:04:26 crc kubenswrapper[4765]: E0121 14:04:26.619957 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.872805 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-c74jr_2962f7bb-1d22-4715-b609-2eb6da1de834/manager/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.886063 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rk4x7_2a3c28ee-e170-4592-8291-db76c15675d1/manager/0.log" Jan 21 14:04:26 crc kubenswrapper[4765]: I0121 14:04:26.995714 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-hv2dn_30a8ff01-0173-45a7-9460-9df64146234d/manager/0.log" Jan 21 14:04:27 crc kubenswrapper[4765]: I0121 14:04:27.010369 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-rxxvb_c78d0245-2ac0-4576-860f-20c8ad7f7fa3/manager/0.log" Jan 21 14:04:27 crc kubenswrapper[4765]: I0121 14:04:27.063487 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-8kq4g_ecd5f054-6284-485a-8c41-6b2338a5c0f4/manager/0.log" Jan 21 14:04:27 crc kubenswrapper[4765]: I0121 14:04:27.135204 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vswxq_8f59aeb8-b8fe-44bc-9e55-94eba06a676b/speaker/0.log" Jan 21 14:04:27 crc kubenswrapper[4765]: I0121 14:04:27.138626 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-r429h_bdcf568f-99c9-4432-b763-ce16903da409/manager/0.log" Jan 21 14:04:27 crc kubenswrapper[4765]: I0121 14:04:27.147555 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vswxq_8f59aeb8-b8fe-44bc-9e55-94eba06a676b/kube-rbac-proxy/0.log" Jan 21 14:04:27 crc kubenswrapper[4765]: I0121 14:04:27.202390 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-m48zr_953ef395-07f2-4b90-8232-77b94a176094/manager/0.log" Jan 21 14:04:27 crc kubenswrapper[4765]: I0121 14:04:27.211081 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-kh677_882965e2-7eb0-4971-9770-e750a8fe36dc/manager/0.log" Jan 21 14:04:27 crc kubenswrapper[4765]: I0121 14:04:27.238027 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7_246657ac-def3-41ce-bd99-a8d00d97c86b/manager/0.log" Jan 21 14:04:27 crc kubenswrapper[4765]: I0121 14:04:27.359135 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-ccbfb74b7-bm4rb_5db9c466-59ec-47fb-8643-560935c3c92c/operator/0.log" Jan 21 14:04:28 crc kubenswrapper[4765]: I0121 14:04:28.539203 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-cssjm_30c79cf6-f62c-498b-8c0b-184d3eec661f/cert-manager-controller/0.log" Jan 21 14:04:28 crc kubenswrapper[4765]: I0121 14:04:28.558813 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-7gnzb_861d65e3-bec0-4a97-9ef1-2ff8d0c660fe/cert-manager-cainjector/0.log" Jan 21 14:04:28 crc kubenswrapper[4765]: I0121 14:04:28.571928 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-gznfw_34bef5eb-722e-4dd8-b19a-ae2ec67a4c93/cert-manager-webhook/0.log" Jan 21 14:04:28 crc kubenswrapper[4765]: I0121 14:04:28.589319 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75fcf77584-5dfd7_af5f1c65-c317-4058-9d98-066b866bf83a/manager/0.log" Jan 21 14:04:28 crc kubenswrapper[4765]: I0121 14:04:28.604614 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-p9ml4_d35e26b9-ec61-4be2-b6f6-f40544f4094f/registry-server/0.log" Jan 21 14:04:28 crc kubenswrapper[4765]: I0121 14:04:28.650760 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-kvhff_17d3ffc3-5383-4beb-91d4-db120ddb1c74/manager/0.log" Jan 21 14:04:28 crc kubenswrapper[4765]: I0121 14:04:28.675057 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-97x9c_2bc79302-e5a0-4288-8b2e-ee371eb775a1/manager/0.log" Jan 21 14:04:28 crc kubenswrapper[4765]: I0121 14:04:28.693583 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-ql7j4_cead9f09-f8d9-4cf3-960e-eb1ba8f1fa99/operator/0.log" Jan 21 14:04:28 crc kubenswrapper[4765]: I0121 14:04:28.714964 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-gh9vl_c7a6160a-aef5-41af-b1cc-cc2cd97125d7/manager/0.log" Jan 21 14:04:28 crc kubenswrapper[4765]: I0121 14:04:28.774413 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-dhcgg_4c4840ab-a9b6-4243-a2f8-e21eaa84f165/manager/0.log" Jan 21 14:04:28 crc kubenswrapper[4765]: I0121 14:04:28.783823 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-s6zq8_be3fcc93-c1a3-4191-8f75-4d8aa5767593/manager/0.log" Jan 21 14:04:28 crc kubenswrapper[4765]: I0121 14:04:28.792869 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-8r9cq_2d19b122-8cf4-4b4a-8d31-037af2fd65fb/manager/0.log" Jan 21 14:04:29 crc kubenswrapper[4765]: I0121 14:04:29.302882 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-x4zpp_50ea39eb-559e-4298-9133-4d2a5c7890cb/control-plane-machine-set-operator/0.log" Jan 21 14:04:29 crc kubenswrapper[4765]: I0121 14:04:29.334336 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mnwzz_c35257f3-6d8a-4917-a956-3b71a0e54c23/kube-rbac-proxy/0.log" Jan 21 14:04:29 crc kubenswrapper[4765]: I0121 14:04:29.352710 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mnwzz_c35257f3-6d8a-4917-a956-3b71a0e54c23/machine-api-operator/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.138706 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-kgmtc_79ffb165-f80d-428c-a29e-998f1a119cd7/nmstate-console-plugin/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.161830 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lbjjz_0da8e178-dbab-4c9c-9e7a-503796386d6f/nmstate-handler/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.175311 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b2d62_7d962382-89ac-40cc-92b2-0bb0a8cecc4d/nmstate-metrics/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.183859 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b2d62_7d962382-89ac-40cc-92b2-0bb0a8cecc4d/kube-rbac-proxy/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.198896 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-fhpqb_26e746e8-47b5-4944-957d-5d43a89b207b/nmstate-operator/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.205900 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m_20b31ee6-0264-4ffb-b43c-abbea443e89e/extract/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.213789 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-lmj8n_a847c8c4-dd77-4cd8-9e06-5adb119c43fc/nmstate-webhook/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.215146 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m_20b31ee6-0264-4ffb-b43c-abbea443e89e/util/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.225376 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m_20b31ee6-0264-4ffb-b43c-abbea443e89e/pull/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.304703 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-848df65fbb-79lv9_448c57b9-0176-42e1-a493-609bc853db01/manager/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.343605 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-kq85p_cd5b6743-7a2a-4d03-8adc-952fb87e6f02/manager/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.356795 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-dgbtx_079ac5a2-3654-48e8-8bf0-597018fc2ca5/manager/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.468154 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-65hfk_4c92e105-ba8b-4828-bc30-857c5431672f/manager/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.479112 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-8pvpr_ab7eaa76-7a22-4d3c-85a3-9b643832d707/manager/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.506141 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-t42c2_00c36135-159f-43be-be7c-b4f01cf2ace7/manager/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.724849 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-c74jr_2962f7bb-1d22-4715-b609-2eb6da1de834/manager/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.735898 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rk4x7_2a3c28ee-e170-4592-8291-db76c15675d1/manager/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.797653 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-hv2dn_30a8ff01-0173-45a7-9460-9df64146234d/manager/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.806925 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-rxxvb_c78d0245-2ac0-4576-860f-20c8ad7f7fa3/manager/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.841862 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-8kq4g_ecd5f054-6284-485a-8c41-6b2338a5c0f4/manager/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.888729 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-r429h_bdcf568f-99c9-4432-b763-ce16903da409/manager/0.log" Jan 21 14:04:30 crc kubenswrapper[4765]: I0121 14:04:30.994388 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-m48zr_953ef395-07f2-4b90-8232-77b94a176094/manager/0.log" Jan 21 14:04:31 crc kubenswrapper[4765]: I0121 14:04:31.003721 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-kh677_882965e2-7eb0-4971-9770-e750a8fe36dc/manager/0.log" Jan 21 14:04:31 crc kubenswrapper[4765]: I0121 14:04:31.024488 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7_246657ac-def3-41ce-bd99-a8d00d97c86b/manager/0.log" Jan 21 14:04:31 crc kubenswrapper[4765]: I0121 14:04:31.131572 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-ccbfb74b7-bm4rb_5db9c466-59ec-47fb-8643-560935c3c92c/operator/0.log" Jan 21 14:04:32 crc kubenswrapper[4765]: I0121 14:04:32.257818 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75fcf77584-5dfd7_af5f1c65-c317-4058-9d98-066b866bf83a/manager/0.log" Jan 21 14:04:32 crc kubenswrapper[4765]: I0121 14:04:32.270069 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-p9ml4_d35e26b9-ec61-4be2-b6f6-f40544f4094f/registry-server/0.log" Jan 21 14:04:32 crc kubenswrapper[4765]: I0121 14:04:32.323167 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-kvhff_17d3ffc3-5383-4beb-91d4-db120ddb1c74/manager/0.log" Jan 21 14:04:32 crc kubenswrapper[4765]: I0121 14:04:32.383276 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-97x9c_2bc79302-e5a0-4288-8b2e-ee371eb775a1/manager/0.log" Jan 21 14:04:32 crc kubenswrapper[4765]: I0121 14:04:32.404806 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-ql7j4_cead9f09-f8d9-4cf3-960e-eb1ba8f1fa99/operator/0.log" Jan 21 14:04:32 crc kubenswrapper[4765]: I0121 14:04:32.429978 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-gh9vl_c7a6160a-aef5-41af-b1cc-cc2cd97125d7/manager/0.log" Jan 21 14:04:32 crc kubenswrapper[4765]: I0121 14:04:32.489633 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-dhcgg_4c4840ab-a9b6-4243-a2f8-e21eaa84f165/manager/0.log" Jan 21 14:04:32 crc kubenswrapper[4765]: I0121 14:04:32.519311 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-s6zq8_be3fcc93-c1a3-4191-8f75-4d8aa5767593/manager/0.log" Jan 21 14:04:32 crc kubenswrapper[4765]: I0121 14:04:32.530531 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-8r9cq_2d19b122-8cf4-4b4a-8d31-037af2fd65fb/manager/0.log" Jan 21 14:04:34 crc kubenswrapper[4765]: I0121 14:04:34.575489 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z68f6_22f3d99e-f58c-4caa-be45-b879c6b614d3/kube-multus-additional-cni-plugins/0.log" Jan 21 14:04:34 crc kubenswrapper[4765]: I0121 14:04:34.584917 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z68f6_22f3d99e-f58c-4caa-be45-b879c6b614d3/egress-router-binary-copy/0.log" Jan 21 14:04:34 crc kubenswrapper[4765]: I0121 14:04:34.593296 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z68f6_22f3d99e-f58c-4caa-be45-b879c6b614d3/cni-plugins/0.log" Jan 21 14:04:34 crc kubenswrapper[4765]: I0121 14:04:34.601839 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z68f6_22f3d99e-f58c-4caa-be45-b879c6b614d3/bond-cni-plugin/0.log" Jan 21 14:04:34 crc kubenswrapper[4765]: I0121 14:04:34.610518 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z68f6_22f3d99e-f58c-4caa-be45-b879c6b614d3/routeoverride-cni/0.log" Jan 21 14:04:34 crc kubenswrapper[4765]: I0121 14:04:34.621168 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z68f6_22f3d99e-f58c-4caa-be45-b879c6b614d3/whereabouts-cni-bincopy/0.log" Jan 21 14:04:34 crc kubenswrapper[4765]: I0121 14:04:34.628586 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z68f6_22f3d99e-f58c-4caa-be45-b879c6b614d3/whereabouts-cni/0.log" Jan 21 14:04:34 crc kubenswrapper[4765]: I0121 14:04:34.655742 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-79kcs_17f0cd0d-b1e3-42d0-abde-21e830e40e5d/multus-admission-controller/0.log" Jan 21 14:04:34 crc kubenswrapper[4765]: I0121 14:04:34.661363 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-79kcs_17f0cd0d-b1e3-42d0-abde-21e830e40e5d/kube-rbac-proxy/0.log" Jan 21 14:04:34 crc kubenswrapper[4765]: I0121 14:04:34.708637 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bplfq_d9b9a5be-6b15-46d2-8715-506efdae8ae7/kube-multus/2.log" Jan 21 14:04:34 crc kubenswrapper[4765]: I0121 14:04:34.780430 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bplfq_d9b9a5be-6b15-46d2-8715-506efdae8ae7/kube-multus/3.log" Jan 21 14:04:34 crc kubenswrapper[4765]: I0121 14:04:34.824095 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-4t7jw_d8dea79f-de5c-4034-9742-c322b723a59c/network-metrics-daemon/0.log" Jan 21 14:04:34 crc kubenswrapper[4765]: I0121 14:04:34.829735 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-4t7jw_d8dea79f-de5c-4034-9742-c322b723a59c/kube-rbac-proxy/0.log" Jan 21 14:04:41 crc kubenswrapper[4765]: I0121 14:04:41.613564 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60" Jan 21 14:04:41 crc kubenswrapper[4765]: E0121 14:04:41.614301 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:04:53 crc kubenswrapper[4765]: I0121 14:04:53.617552 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60" Jan 21 14:04:53 crc kubenswrapper[4765]: E0121 14:04:53.618507 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:05:04 crc kubenswrapper[4765]: I0121 14:05:04.613996 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60" Jan 21 14:05:04 crc kubenswrapper[4765]: E0121 14:05:04.614859 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Jan 21 14:04:41 crc kubenswrapper[4765]: I0121 14:04:41.613564 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:04:41 crc kubenswrapper[4765]: E0121 14:04:41.614301 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:04:53 crc kubenswrapper[4765]: I0121 14:04:53.617552 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:04:53 crc kubenswrapper[4765]: E0121 14:04:53.618507 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:05:04 crc kubenswrapper[4765]: I0121 14:05:04.613996 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:05:04 crc kubenswrapper[4765]: E0121 14:05:04.614859 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:05:16 crc kubenswrapper[4765]: I0121 14:05:16.613748 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:05:16 crc kubenswrapper[4765]: E0121 14:05:16.614404 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:05:30 crc kubenswrapper[4765]: I0121 14:05:30.614004 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:05:30 crc kubenswrapper[4765]: E0121 14:05:30.614701 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:05:44 crc kubenswrapper[4765]: I0121 14:05:44.613979 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:05:44 crc kubenswrapper[4765]: E0121 14:05:44.615030 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:05:58 crc kubenswrapper[4765]: I0121 14:05:58.613735 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:05:58 crc kubenswrapper[4765]: E0121 14:05:58.614531 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:06:12 crc kubenswrapper[4765]: I0121 14:06:12.613953 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:06:12 crc kubenswrapper[4765]: E0121 14:06:12.614844 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:06:24 crc kubenswrapper[4765]: I0121 14:06:24.613763 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:06:24 crc kubenswrapper[4765]: E0121 14:06:24.614689 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:06:39 crc kubenswrapper[4765]: I0121 14:06:39.621310 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:06:39 crc kubenswrapper[4765]: E0121 14:06:39.621996 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:06:54 crc kubenswrapper[4765]: I0121 14:06:54.614063 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:06:54 crc kubenswrapper[4765]: E0121 14:06:54.614795 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:07:06 crc kubenswrapper[4765]: I0121 14:07:06.614504 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:07:06 crc kubenswrapper[4765]: E0121 14:07:06.615394 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:07:18 crc kubenswrapper[4765]: I0121 14:07:18.614073 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:07:18 crc kubenswrapper[4765]: E0121 14:07:18.614657 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:07:33 crc kubenswrapper[4765]: I0121 14:07:33.616673 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:07:33 crc kubenswrapper[4765]: E0121 14:07:33.618607 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:07:47 crc kubenswrapper[4765]: I0121 14:07:47.615084 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:07:47 crc kubenswrapper[4765]: E0121 14:07:47.616324 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:07:58 crc kubenswrapper[4765]: I0121 14:07:58.613852 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:07:58 crc kubenswrapper[4765]: E0121 14:07:58.614616 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:08:12 crc kubenswrapper[4765]: I0121 14:08:12.613610 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:08:12 crc kubenswrapper[4765]: E0121 14:08:12.615658 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:08:23 crc kubenswrapper[4765]: I0121 14:08:23.617231 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:08:23 crc kubenswrapper[4765]: E0121 14:08:23.618001 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:08:37 crc kubenswrapper[4765]: I0121 14:08:37.614677 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:08:37 crc kubenswrapper[4765]: E0121 14:08:37.615388 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
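Roughly every 11-14 s in the run above, the pod worker re-syncs machine-config-daemon-v72nq, asks to restart the failed container, and is refused because the container is still inside its crash-loop back-off window, whose ceiling is the "back-off 5m0s" in the error text. A sketch of that restart-delay policy, assuming the commonly described kubelet defaults of a ~10 s initial delay doubling up to a 5 m cap (the constants are assumptions, not read from this node's configuration):

    package main

    import (
        "fmt"
        "time"
    )

    // backoffAfter returns the restart delay after n consecutive failures,
    // doubling from an initial delay up to a fixed cap. The 10s/5m defaults
    // below are assumed; they match the "back-off 5m0s" ceiling in the log.
    func backoffAfter(n int, initial, max time.Duration) time.Duration {
        d := initial
        for i := 1; i < n; i++ {
            d *= 2
            if d >= max {
                return max
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 8; n++ {
            fmt.Printf("failure %d -> wait %s\n", n, backoffAfter(n, 10*time.Second, 5*time.Minute))
        }
        // Prints 10s, 20s, 40s, 1m20s, 2m40s, then 5m0s from the sixth failure on:
        // once the cap is reached, every sync inside the window logs CrashLoopBackOff.
    }
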
Jan 21 14:08:38 crc kubenswrapper[4765]: I0121 14:08:38.806452 4765 scope.go:117] "RemoveContainer" containerID="a85524050ca98a34d7f437eafd328cd7d181cd4e2e07191805caf0538b6ebfae"
Jan 21 14:08:40 crc kubenswrapper[4765]: I0121 14:08:40.780637 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tjbhz"]
Jan 21 14:08:40 crc kubenswrapper[4765]: E0121 14:08:40.783012 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd7d700-640c-4b38-8589-1c6ac6d01688" containerName="extract-content"
Jan 21 14:08:40 crc kubenswrapper[4765]: I0121 14:08:40.783034 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd7d700-640c-4b38-8589-1c6ac6d01688" containerName="extract-content"
Jan 21 14:08:40 crc kubenswrapper[4765]: E0121 14:08:40.783069 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd7d700-640c-4b38-8589-1c6ac6d01688" containerName="extract-utilities"
Jan 21 14:08:40 crc kubenswrapper[4765]: I0121 14:08:40.783077 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd7d700-640c-4b38-8589-1c6ac6d01688" containerName="extract-utilities"
Jan 21 14:08:40 crc kubenswrapper[4765]: E0121 14:08:40.783090 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8df9ad-a209-42ac-87fb-44a39bf09e47" containerName="extract-content"
Jan 21 14:08:40 crc kubenswrapper[4765]: I0121 14:08:40.783097 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8df9ad-a209-42ac-87fb-44a39bf09e47" containerName="extract-content"
Jan 21 14:08:40 crc kubenswrapper[4765]: E0121 14:08:40.783107 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd7d700-640c-4b38-8589-1c6ac6d01688" containerName="registry-server"
Jan 21 14:08:40 crc kubenswrapper[4765]: I0121 14:08:40.783113 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd7d700-640c-4b38-8589-1c6ac6d01688" containerName="registry-server"
Jan 21 14:08:40 crc kubenswrapper[4765]: E0121 14:08:40.783121 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8df9ad-a209-42ac-87fb-44a39bf09e47" containerName="extract-utilities"
Jan 21 14:08:40 crc kubenswrapper[4765]: I0121 14:08:40.783126 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8df9ad-a209-42ac-87fb-44a39bf09e47" containerName="extract-utilities"
Jan 21 14:08:40 crc kubenswrapper[4765]: E0121 14:08:40.783136 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8df9ad-a209-42ac-87fb-44a39bf09e47" containerName="registry-server"
Jan 21 14:08:40 crc kubenswrapper[4765]: I0121 14:08:40.783142 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8df9ad-a209-42ac-87fb-44a39bf09e47" containerName="registry-server"
Jan 21 14:08:40 crc kubenswrapper[4765]: I0121 14:08:40.783317 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8df9ad-a209-42ac-87fb-44a39bf09e47" containerName="registry-server"
Jan 21 14:08:40 crc kubenswrapper[4765]: I0121 14:08:40.783339 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcd7d700-640c-4b38-8589-1c6ac6d01688" containerName="registry-server"
Jan 21 14:08:40 crc kubenswrapper[4765]: I0121 14:08:40.784654 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tjbhz"
Jan 21 14:08:40 crc kubenswrapper[4765]: I0121 14:08:40.807675 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tjbhz"]
Jan 21 14:08:40 crc kubenswrapper[4765]: I0121 14:08:40.916946 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhlld\" (UniqueName: \"kubernetes.io/projected/6d5da088-ee53-4c6f-81a6-585d214288bb-kube-api-access-vhlld\") pod \"redhat-operators-tjbhz\" (UID: \"6d5da088-ee53-4c6f-81a6-585d214288bb\") " pod="openshift-marketplace/redhat-operators-tjbhz"
Jan 21 14:08:40 crc kubenswrapper[4765]: I0121 14:08:40.917151 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5da088-ee53-4c6f-81a6-585d214288bb-utilities\") pod \"redhat-operators-tjbhz\" (UID: \"6d5da088-ee53-4c6f-81a6-585d214288bb\") " pod="openshift-marketplace/redhat-operators-tjbhz"
Jan 21 14:08:40 crc kubenswrapper[4765]: I0121 14:08:40.917715 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5da088-ee53-4c6f-81a6-585d214288bb-catalog-content\") pod \"redhat-operators-tjbhz\" (UID: \"6d5da088-ee53-4c6f-81a6-585d214288bb\") " pod="openshift-marketplace/redhat-operators-tjbhz"
Jan 21 14:08:41 crc kubenswrapper[4765]: I0121 14:08:41.019922 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5da088-ee53-4c6f-81a6-585d214288bb-utilities\") pod \"redhat-operators-tjbhz\" (UID: \"6d5da088-ee53-4c6f-81a6-585d214288bb\") " pod="openshift-marketplace/redhat-operators-tjbhz"
Jan 21 14:08:41 crc kubenswrapper[4765]: I0121 14:08:41.020533 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5da088-ee53-4c6f-81a6-585d214288bb-catalog-content\") pod \"redhat-operators-tjbhz\" (UID: \"6d5da088-ee53-4c6f-81a6-585d214288bb\") " pod="openshift-marketplace/redhat-operators-tjbhz"
Jan 21 14:08:41 crc kubenswrapper[4765]: I0121 14:08:41.020556 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5da088-ee53-4c6f-81a6-585d214288bb-utilities\") pod \"redhat-operators-tjbhz\" (UID: \"6d5da088-ee53-4c6f-81a6-585d214288bb\") " pod="openshift-marketplace/redhat-operators-tjbhz"
Jan 21 14:08:41 crc kubenswrapper[4765]: I0121 14:08:41.020689 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhlld\" (UniqueName: \"kubernetes.io/projected/6d5da088-ee53-4c6f-81a6-585d214288bb-kube-api-access-vhlld\") pod \"redhat-operators-tjbhz\" (UID: \"6d5da088-ee53-4c6f-81a6-585d214288bb\") " pod="openshift-marketplace/redhat-operators-tjbhz"
Jan 21 14:08:41 crc kubenswrapper[4765]: I0121 14:08:41.021102 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5da088-ee53-4c6f-81a6-585d214288bb-catalog-content\") pod \"redhat-operators-tjbhz\" (UID: \"6d5da088-ee53-4c6f-81a6-585d214288bb\") " pod="openshift-marketplace/redhat-operators-tjbhz"
Jan 21 14:08:41 crc kubenswrapper[4765]: I0121 14:08:41.044631 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhlld\" (UniqueName: \"kubernetes.io/projected/6d5da088-ee53-4c6f-81a6-585d214288bb-kube-api-access-vhlld\") pod \"redhat-operators-tjbhz\" (UID: \"6d5da088-ee53-4c6f-81a6-585d214288bb\") " pod="openshift-marketplace/redhat-operators-tjbhz"
Jan 21 14:08:41 crc kubenswrapper[4765]: I0121 14:08:41.117481 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tjbhz"
Jan 21 14:08:41 crc kubenswrapper[4765]: I0121 14:08:41.465927 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tjbhz"]
Jan 21 14:08:41 crc kubenswrapper[4765]: W0121 14:08:41.488975 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d5da088_ee53_4c6f_81a6_585d214288bb.slice/crio-a41af2b440d16ab9fda03f446148310b5b6e245f90b4bfd88f07e42fa6231730 WatchSource:0}: Error finding container a41af2b440d16ab9fda03f446148310b5b6e245f90b4bfd88f07e42fa6231730: Status 404 returned error can't find the container with id a41af2b440d16ab9fda03f446148310b5b6e245f90b4bfd88f07e42fa6231730
Jan 21 14:08:42 crc kubenswrapper[4765]: I0121 14:08:42.408418 4765 generic.go:334] "Generic (PLEG): container finished" podID="6d5da088-ee53-4c6f-81a6-585d214288bb" containerID="28d26597cd05a4c799c64850773b23c3ce9a17e0b6ef81c030fc91634bda5c13" exitCode=0
Jan 21 14:08:42 crc kubenswrapper[4765]: I0121 14:08:42.408504 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjbhz" event={"ID":"6d5da088-ee53-4c6f-81a6-585d214288bb","Type":"ContainerDied","Data":"28d26597cd05a4c799c64850773b23c3ce9a17e0b6ef81c030fc91634bda5c13"}
Jan 21 14:08:42 crc kubenswrapper[4765]: I0121 14:08:42.408703 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjbhz" event={"ID":"6d5da088-ee53-4c6f-81a6-585d214288bb","Type":"ContainerStarted","Data":"a41af2b440d16ab9fda03f446148310b5b6e245f90b4bfd88f07e42fa6231730"}
Jan 21 14:08:42 crc kubenswrapper[4765]: I0121 14:08:42.410455 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 14:08:44 crc kubenswrapper[4765]: I0121 14:08:44.427528 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjbhz" event={"ID":"6d5da088-ee53-4c6f-81a6-585d214288bb","Type":"ContainerStarted","Data":"586ae2ae609878c17789ed579a430a4b7fe076dc16739daf4b16f13b576ae3fc"}
Jan 21 14:08:48 crc kubenswrapper[4765]: I0121 14:08:48.463263 4765 generic.go:334] "Generic (PLEG): container finished" podID="6d5da088-ee53-4c6f-81a6-585d214288bb" containerID="586ae2ae609878c17789ed579a430a4b7fe076dc16739daf4b16f13b576ae3fc" exitCode=0
Jan 21 14:08:48 crc kubenswrapper[4765]: I0121 14:08:48.463325 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjbhz" event={"ID":"6d5da088-ee53-4c6f-81a6-585d214288bb","Type":"ContainerDied","Data":"586ae2ae609878c17789ed579a430a4b7fe076dc16739daf4b16f13b576ae3fc"}
Jan 21 14:08:49 crc kubenswrapper[4765]: I0121 14:08:49.476805 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjbhz" event={"ID":"6d5da088-ee53-4c6f-81a6-585d214288bb","Type":"ContainerStarted","Data":"875eaf56319f2bf243ee18d805f870044b289fdea83e609746ed766d28563057"}
Jan 21 14:08:49 crc kubenswrapper[4765]: I0121 14:08:49.507968 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tjbhz" podStartSLOduration=2.9046810990000003 podStartE2EDuration="9.50794742s" podCreationTimestamp="2026-01-21 14:08:40 +0000 UTC" firstStartedPulling="2026-01-21 14:08:42.410251341 +0000 UTC m=+3983.427977163" lastFinishedPulling="2026-01-21 14:08:49.013517662 +0000 UTC m=+3990.031243484" observedRunningTime="2026-01-21 14:08:49.500997393 +0000 UTC m=+3990.518723215" watchObservedRunningTime="2026-01-21 14:08:49.50794742 +0000 UTC m=+3990.525673252"
Jan 21 14:08:51 crc kubenswrapper[4765]: I0121 14:08:51.118164 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tjbhz"
Jan 21 14:08:51 crc kubenswrapper[4765]: I0121 14:08:51.118520 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tjbhz"
Jan 21 14:08:51 crc kubenswrapper[4765]: I0121 14:08:51.618967 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:08:51 crc kubenswrapper[4765]: E0121 14:08:51.619516 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:08:52 crc kubenswrapper[4765]: I0121 14:08:52.172537 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tjbhz" podUID="6d5da088-ee53-4c6f-81a6-585d214288bb" containerName="registry-server" probeResult="failure" output=<
Jan 21 14:08:52 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s
Jan 21 14:08:52 crc kubenswrapper[4765]: >
Jan 21 14:09:01 crc kubenswrapper[4765]: I0121 14:09:01.216739 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tjbhz"
Jan 21 14:09:01 crc kubenswrapper[4765]: I0121 14:09:01.300963 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tjbhz"
Jan 21 14:09:01 crc kubenswrapper[4765]: I0121 14:09:01.473051 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tjbhz"]
Jan 21 14:09:02 crc kubenswrapper[4765]: I0121 14:09:02.609369 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tjbhz" podUID="6d5da088-ee53-4c6f-81a6-585d214288bb" containerName="registry-server" containerID="cri-o://875eaf56319f2bf243ee18d805f870044b289fdea83e609746ed766d28563057" gracePeriod=2
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.257952 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tjbhz"
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.396892 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5da088-ee53-4c6f-81a6-585d214288bb-catalog-content\") pod \"6d5da088-ee53-4c6f-81a6-585d214288bb\" (UID: \"6d5da088-ee53-4c6f-81a6-585d214288bb\") "
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.396956 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhlld\" (UniqueName: \"kubernetes.io/projected/6d5da088-ee53-4c6f-81a6-585d214288bb-kube-api-access-vhlld\") pod \"6d5da088-ee53-4c6f-81a6-585d214288bb\" (UID: \"6d5da088-ee53-4c6f-81a6-585d214288bb\") "
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.397028 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5da088-ee53-4c6f-81a6-585d214288bb-utilities\") pod \"6d5da088-ee53-4c6f-81a6-585d214288bb\" (UID: \"6d5da088-ee53-4c6f-81a6-585d214288bb\") "
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.397881 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d5da088-ee53-4c6f-81a6-585d214288bb-utilities" (OuterVolumeSpecName: "utilities") pod "6d5da088-ee53-4c6f-81a6-585d214288bb" (UID: "6d5da088-ee53-4c6f-81a6-585d214288bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.403448 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d5da088-ee53-4c6f-81a6-585d214288bb-kube-api-access-vhlld" (OuterVolumeSpecName: "kube-api-access-vhlld") pod "6d5da088-ee53-4c6f-81a6-585d214288bb" (UID: "6d5da088-ee53-4c6f-81a6-585d214288bb"). InnerVolumeSpecName "kube-api-access-vhlld". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.499712 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhlld\" (UniqueName: \"kubernetes.io/projected/6d5da088-ee53-4c6f-81a6-585d214288bb-kube-api-access-vhlld\") on node \"crc\" DevicePath \"\""
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.499747 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d5da088-ee53-4c6f-81a6-585d214288bb-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.527388 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d5da088-ee53-4c6f-81a6-585d214288bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d5da088-ee53-4c6f-81a6-585d214288bb" (UID: "6d5da088-ee53-4c6f-81a6-585d214288bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.601703 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d5da088-ee53-4c6f-81a6-585d214288bb-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.614656 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:09:03 crc kubenswrapper[4765]: E0121 14:09:03.615946 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.621319 4765 generic.go:334] "Generic (PLEG): container finished" podID="6d5da088-ee53-4c6f-81a6-585d214288bb" containerID="875eaf56319f2bf243ee18d805f870044b289fdea83e609746ed766d28563057" exitCode=0
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.621420 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tjbhz"
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.625428 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjbhz" event={"ID":"6d5da088-ee53-4c6f-81a6-585d214288bb","Type":"ContainerDied","Data":"875eaf56319f2bf243ee18d805f870044b289fdea83e609746ed766d28563057"}
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.625467 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjbhz" event={"ID":"6d5da088-ee53-4c6f-81a6-585d214288bb","Type":"ContainerDied","Data":"a41af2b440d16ab9fda03f446148310b5b6e245f90b4bfd88f07e42fa6231730"}
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.625483 4765 scope.go:117] "RemoveContainer" containerID="875eaf56319f2bf243ee18d805f870044b289fdea83e609746ed766d28563057"
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.660140 4765 scope.go:117] "RemoveContainer" containerID="586ae2ae609878c17789ed579a430a4b7fe076dc16739daf4b16f13b576ae3fc"
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.677520 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tjbhz"]
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.681085 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tjbhz"]
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.705970 4765 scope.go:117] "RemoveContainer" containerID="28d26597cd05a4c799c64850773b23c3ce9a17e0b6ef81c030fc91634bda5c13"
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.726255 4765 scope.go:117] "RemoveContainer" containerID="875eaf56319f2bf243ee18d805f870044b289fdea83e609746ed766d28563057"
Jan 21 14:09:03 crc kubenswrapper[4765]: E0121 14:09:03.726684 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"875eaf56319f2bf243ee18d805f870044b289fdea83e609746ed766d28563057\": container with ID starting with 875eaf56319f2bf243ee18d805f870044b289fdea83e609746ed766d28563057 not found: ID does not exist" containerID="875eaf56319f2bf243ee18d805f870044b289fdea83e609746ed766d28563057"
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.726724 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"875eaf56319f2bf243ee18d805f870044b289fdea83e609746ed766d28563057"} err="failed to get container status \"875eaf56319f2bf243ee18d805f870044b289fdea83e609746ed766d28563057\": rpc error: code = NotFound desc = could not find container \"875eaf56319f2bf243ee18d805f870044b289fdea83e609746ed766d28563057\": container with ID starting with 875eaf56319f2bf243ee18d805f870044b289fdea83e609746ed766d28563057 not found: ID does not exist"
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.726750 4765 scope.go:117] "RemoveContainer" containerID="586ae2ae609878c17789ed579a430a4b7fe076dc16739daf4b16f13b576ae3fc"
Jan 21 14:09:03 crc kubenswrapper[4765]: E0121 14:09:03.727044 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"586ae2ae609878c17789ed579a430a4b7fe076dc16739daf4b16f13b576ae3fc\": container with ID starting with 586ae2ae609878c17789ed579a430a4b7fe076dc16739daf4b16f13b576ae3fc not found: ID does not exist" containerID="586ae2ae609878c17789ed579a430a4b7fe076dc16739daf4b16f13b576ae3fc"
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.727068 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"586ae2ae609878c17789ed579a430a4b7fe076dc16739daf4b16f13b576ae3fc"} err="failed to get container status \"586ae2ae609878c17789ed579a430a4b7fe076dc16739daf4b16f13b576ae3fc\": rpc error: code = NotFound desc = could not find container \"586ae2ae609878c17789ed579a430a4b7fe076dc16739daf4b16f13b576ae3fc\": container with ID starting with 586ae2ae609878c17789ed579a430a4b7fe076dc16739daf4b16f13b576ae3fc not found: ID does not exist"
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.727082 4765 scope.go:117] "RemoveContainer" containerID="28d26597cd05a4c799c64850773b23c3ce9a17e0b6ef81c030fc91634bda5c13"
Jan 21 14:09:03 crc kubenswrapper[4765]: E0121 14:09:03.727347 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28d26597cd05a4c799c64850773b23c3ce9a17e0b6ef81c030fc91634bda5c13\": container with ID starting with 28d26597cd05a4c799c64850773b23c3ce9a17e0b6ef81c030fc91634bda5c13 not found: ID does not exist" containerID="28d26597cd05a4c799c64850773b23c3ce9a17e0b6ef81c030fc91634bda5c13"
Jan 21 14:09:03 crc kubenswrapper[4765]: I0121 14:09:03.727377 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28d26597cd05a4c799c64850773b23c3ce9a17e0b6ef81c030fc91634bda5c13"} err="failed to get container status \"28d26597cd05a4c799c64850773b23c3ce9a17e0b6ef81c030fc91634bda5c13\": rpc error: code = NotFound desc = could not find container \"28d26597cd05a4c799c64850773b23c3ce9a17e0b6ef81c030fc91634bda5c13\": container with ID starting with 28d26597cd05a4c799c64850773b23c3ce9a17e0b6ef81c030fc91634bda5c13 not found: ID does not exist"
Jan 21 14:09:05 crc kubenswrapper[4765]: I0121 14:09:05.625961 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d5da088-ee53-4c6f-81a6-585d214288bb" path="/var/lib/kubelet/pods/6d5da088-ee53-4c6f-81a6-585d214288bb/volumes"
Jan 21 14:09:15 crc kubenswrapper[4765]: I0121 14:09:15.619795 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60" Jan 21 14:09:16 crc kubenswrapper[4765]: I0121 14:09:16.772796 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"2ca0f5d9400cc961af6025381ed82b34f4e78bfb4f3b3d8562479cb007ef5b63"} Jan 21 14:11:38 crc kubenswrapper[4765]: I0121 14:11:38.114865 4765 generic.go:334] "Generic (PLEG): container finished" podID="f2dc91b5-0f41-4899-90c9-e0dcab80e4d8" containerID="e311900b3468a4f0f64592bf9989a203a47d4c97b1df9c61af96d9f3cc861dc8" exitCode=0 Jan 21 14:11:38 crc kubenswrapper[4765]: I0121 14:11:38.114966 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fjdnn/must-gather-k5mtw" event={"ID":"f2dc91b5-0f41-4899-90c9-e0dcab80e4d8","Type":"ContainerDied","Data":"e311900b3468a4f0f64592bf9989a203a47d4c97b1df9c61af96d9f3cc861dc8"} Jan 21 14:11:38 crc kubenswrapper[4765]: I0121 14:11:38.116001 4765 scope.go:117] "RemoveContainer" containerID="e311900b3468a4f0f64592bf9989a203a47d4c97b1df9c61af96d9f3cc861dc8" Jan 21 14:11:39 crc kubenswrapper[4765]: I0121 14:11:39.113078 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fjdnn_must-gather-k5mtw_f2dc91b5-0f41-4899-90c9-e0dcab80e4d8/gather/0.log" Jan 21 14:11:41 crc kubenswrapper[4765]: I0121 14:11:41.755659 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xblm9"] Jan 21 14:11:41 crc kubenswrapper[4765]: E0121 14:11:41.756750 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5da088-ee53-4c6f-81a6-585d214288bb" containerName="extract-content" Jan 21 14:11:41 crc kubenswrapper[4765]: I0121 14:11:41.756767 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5da088-ee53-4c6f-81a6-585d214288bb" containerName="extract-content" Jan 21 14:11:41 crc kubenswrapper[4765]: E0121 14:11:41.756786 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5da088-ee53-4c6f-81a6-585d214288bb" containerName="extract-utilities" Jan 21 14:11:41 crc kubenswrapper[4765]: I0121 14:11:41.756798 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5da088-ee53-4c6f-81a6-585d214288bb" containerName="extract-utilities" Jan 21 14:11:41 crc kubenswrapper[4765]: E0121 14:11:41.756814 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d5da088-ee53-4c6f-81a6-585d214288bb" containerName="registry-server" Jan 21 14:11:41 crc kubenswrapper[4765]: I0121 14:11:41.756823 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d5da088-ee53-4c6f-81a6-585d214288bb" containerName="registry-server" Jan 21 14:11:41 crc kubenswrapper[4765]: I0121 14:11:41.757040 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d5da088-ee53-4c6f-81a6-585d214288bb" containerName="registry-server" Jan 21 14:11:41 crc kubenswrapper[4765]: I0121 14:11:41.762555 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xblm9" Jan 21 14:11:41 crc kubenswrapper[4765]: I0121 14:11:41.768027 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xblm9"] Jan 21 14:11:41 crc kubenswrapper[4765]: I0121 14:11:41.813093 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svfq5\" (UniqueName: \"kubernetes.io/projected/3a347ae7-e674-45b7-9e93-ddecdb4a0cad-kube-api-access-svfq5\") pod \"redhat-marketplace-xblm9\" (UID: \"3a347ae7-e674-45b7-9e93-ddecdb4a0cad\") " pod="openshift-marketplace/redhat-marketplace-xblm9" Jan 21 14:11:41 crc kubenswrapper[4765]: I0121 14:11:41.813461 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a347ae7-e674-45b7-9e93-ddecdb4a0cad-catalog-content\") pod \"redhat-marketplace-xblm9\" (UID: \"3a347ae7-e674-45b7-9e93-ddecdb4a0cad\") " pod="openshift-marketplace/redhat-marketplace-xblm9" Jan 21 14:11:41 crc kubenswrapper[4765]: I0121 14:11:41.813499 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a347ae7-e674-45b7-9e93-ddecdb4a0cad-utilities\") pod \"redhat-marketplace-xblm9\" (UID: \"3a347ae7-e674-45b7-9e93-ddecdb4a0cad\") " pod="openshift-marketplace/redhat-marketplace-xblm9" Jan 21 14:11:41 crc kubenswrapper[4765]: I0121 14:11:41.915393 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a347ae7-e674-45b7-9e93-ddecdb4a0cad-catalog-content\") pod \"redhat-marketplace-xblm9\" (UID: \"3a347ae7-e674-45b7-9e93-ddecdb4a0cad\") " pod="openshift-marketplace/redhat-marketplace-xblm9" Jan 21 14:11:41 crc kubenswrapper[4765]: I0121 14:11:41.915455 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a347ae7-e674-45b7-9e93-ddecdb4a0cad-utilities\") pod \"redhat-marketplace-xblm9\" (UID: \"3a347ae7-e674-45b7-9e93-ddecdb4a0cad\") " pod="openshift-marketplace/redhat-marketplace-xblm9" Jan 21 14:11:41 crc kubenswrapper[4765]: I0121 14:11:41.915494 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svfq5\" (UniqueName: \"kubernetes.io/projected/3a347ae7-e674-45b7-9e93-ddecdb4a0cad-kube-api-access-svfq5\") pod \"redhat-marketplace-xblm9\" (UID: \"3a347ae7-e674-45b7-9e93-ddecdb4a0cad\") " pod="openshift-marketplace/redhat-marketplace-xblm9" Jan 21 14:11:41 crc kubenswrapper[4765]: I0121 14:11:41.915920 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a347ae7-e674-45b7-9e93-ddecdb4a0cad-catalog-content\") pod \"redhat-marketplace-xblm9\" (UID: \"3a347ae7-e674-45b7-9e93-ddecdb4a0cad\") " pod="openshift-marketplace/redhat-marketplace-xblm9" Jan 21 14:11:41 crc kubenswrapper[4765]: I0121 14:11:41.916257 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a347ae7-e674-45b7-9e93-ddecdb4a0cad-utilities\") pod \"redhat-marketplace-xblm9\" (UID: \"3a347ae7-e674-45b7-9e93-ddecdb4a0cad\") " pod="openshift-marketplace/redhat-marketplace-xblm9" Jan 21 14:11:41 crc kubenswrapper[4765]: I0121 14:11:41.935291 4765 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-svfq5\" (UniqueName: \"kubernetes.io/projected/3a347ae7-e674-45b7-9e93-ddecdb4a0cad-kube-api-access-svfq5\") pod \"redhat-marketplace-xblm9\" (UID: \"3a347ae7-e674-45b7-9e93-ddecdb4a0cad\") " pod="openshift-marketplace/redhat-marketplace-xblm9" Jan 21 14:11:42 crc kubenswrapper[4765]: I0121 14:11:42.088945 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xblm9" Jan 21 14:11:42 crc kubenswrapper[4765]: I0121 14:11:42.613589 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xblm9"] Jan 21 14:11:43 crc kubenswrapper[4765]: I0121 14:11:43.175092 4765 generic.go:334] "Generic (PLEG): container finished" podID="3a347ae7-e674-45b7-9e93-ddecdb4a0cad" containerID="d0e1e19c7af1437c7a08203fd1d541dc998bf10eca142e19a819b647b0923600" exitCode=0 Jan 21 14:11:43 crc kubenswrapper[4765]: I0121 14:11:43.175337 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xblm9" event={"ID":"3a347ae7-e674-45b7-9e93-ddecdb4a0cad","Type":"ContainerDied","Data":"d0e1e19c7af1437c7a08203fd1d541dc998bf10eca142e19a819b647b0923600"} Jan 21 14:11:43 crc kubenswrapper[4765]: I0121 14:11:43.175477 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xblm9" event={"ID":"3a347ae7-e674-45b7-9e93-ddecdb4a0cad","Type":"ContainerStarted","Data":"5bcfb3450714082e0b9405c16f53c1b51bce3a7403df4c6f6d2618359648a59e"} Jan 21 14:11:44 crc kubenswrapper[4765]: I0121 14:11:44.204119 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xblm9" event={"ID":"3a347ae7-e674-45b7-9e93-ddecdb4a0cad","Type":"ContainerStarted","Data":"0d021ed48688e0284e6c07a5e221cae1369598a0cd6a1b5c2543c7043ae6be40"} Jan 21 14:11:44 crc kubenswrapper[4765]: I0121 14:11:44.445738 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:11:44 crc kubenswrapper[4765]: I0121 14:11:44.445831 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:11:45 crc kubenswrapper[4765]: I0121 14:11:45.214174 4765 generic.go:334] "Generic (PLEG): container finished" podID="3a347ae7-e674-45b7-9e93-ddecdb4a0cad" containerID="0d021ed48688e0284e6c07a5e221cae1369598a0cd6a1b5c2543c7043ae6be40" exitCode=0 Jan 21 14:11:45 crc kubenswrapper[4765]: I0121 14:11:45.214515 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xblm9" event={"ID":"3a347ae7-e674-45b7-9e93-ddecdb4a0cad","Type":"ContainerDied","Data":"0d021ed48688e0284e6c07a5e221cae1369598a0cd6a1b5c2543c7043ae6be40"} Jan 21 14:11:46 crc kubenswrapper[4765]: I0121 14:11:46.224372 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xblm9" event={"ID":"3a347ae7-e674-45b7-9e93-ddecdb4a0cad","Type":"ContainerStarted","Data":"7c9c784863fc93deefc7043b676843810cf891f254f47ebef7c43bc14d90a471"} Jan 21 14:11:46 crc 
kubenswrapper[4765]: I0121 14:11:46.266659 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xblm9" podStartSLOduration=2.836136551 podStartE2EDuration="5.266633602s" podCreationTimestamp="2026-01-21 14:11:41 +0000 UTC" firstStartedPulling="2026-01-21 14:11:43.179269447 +0000 UTC m=+4164.196995269" lastFinishedPulling="2026-01-21 14:11:45.609766478 +0000 UTC m=+4166.627492320" observedRunningTime="2026-01-21 14:11:46.255310081 +0000 UTC m=+4167.273035923" watchObservedRunningTime="2026-01-21 14:11:46.266633602 +0000 UTC m=+4167.284359424" Jan 21 14:11:47 crc kubenswrapper[4765]: I0121 14:11:47.672152 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fjdnn/must-gather-k5mtw"] Jan 21 14:11:47 crc kubenswrapper[4765]: I0121 14:11:47.672774 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-fjdnn/must-gather-k5mtw" podUID="f2dc91b5-0f41-4899-90c9-e0dcab80e4d8" containerName="copy" containerID="cri-o://17a810fdacc021ad0ab0645d1207a65f40c0a14c2de5221844979f563062fc03" gracePeriod=2 Jan 21 14:11:47 crc kubenswrapper[4765]: I0121 14:11:47.680441 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fjdnn/must-gather-k5mtw"] Jan 21 14:11:48 crc kubenswrapper[4765]: I0121 14:11:48.165088 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fjdnn_must-gather-k5mtw_f2dc91b5-0f41-4899-90c9-e0dcab80e4d8/copy/0.log" Jan 21 14:11:48 crc kubenswrapper[4765]: I0121 14:11:48.165587 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fjdnn/must-gather-k5mtw" Jan 21 14:11:48 crc kubenswrapper[4765]: I0121 14:11:48.241892 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fjdnn_must-gather-k5mtw_f2dc91b5-0f41-4899-90c9-e0dcab80e4d8/copy/0.log" Jan 21 14:11:48 crc kubenswrapper[4765]: I0121 14:11:48.242236 4765 generic.go:334] "Generic (PLEG): container finished" podID="f2dc91b5-0f41-4899-90c9-e0dcab80e4d8" containerID="17a810fdacc021ad0ab0645d1207a65f40c0a14c2de5221844979f563062fc03" exitCode=143 Jan 21 14:11:48 crc kubenswrapper[4765]: I0121 14:11:48.242280 4765 scope.go:117] "RemoveContainer" containerID="17a810fdacc021ad0ab0645d1207a65f40c0a14c2de5221844979f563062fc03" Jan 21 14:11:48 crc kubenswrapper[4765]: I0121 14:11:48.242410 4765 util.go:48] "No ready sandbox for pod can be found. 
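The two "Observed pod startup duration" records (14:08:49.507968 for redhat-operators-tjbhz and 14:11:46.266659 just above for redhat-marketplace-xblm9) encode a simple relation: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E time minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The tjbhz record reconciles exactly; the xblm9 record differs by about 20 ns, consistent with separate monotonic-clock readings. Checking the arithmetic on the tjbhz numbers:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values from the 14:08:49.507968 record, expressed as offsets from
        // podCreationTimestamp (2026-01-21 14:08:40 UTC).
        e2e := 9507947420 * time.Nanosecond       // podStartE2EDuration = 9.50794742s
        firstPull := 2410251341 * time.Nanosecond // firstStartedPulling at +2.410251341s
        lastPull := 9013517662 * time.Nanosecond  // lastFinishedPulling at +9.013517662s

        slo := e2e - (lastPull - firstPull) // E2E minus time spent pulling images
        fmt.Println(slo)                    // 2.904681099s, matching podStartSLOduration
    }
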
Need to start a new one" pod="openshift-must-gather-fjdnn/must-gather-k5mtw" Jan 21 14:11:48 crc kubenswrapper[4765]: I0121 14:11:48.251420 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2dc91b5-0f41-4899-90c9-e0dcab80e4d8-must-gather-output\") pod \"f2dc91b5-0f41-4899-90c9-e0dcab80e4d8\" (UID: \"f2dc91b5-0f41-4899-90c9-e0dcab80e4d8\") " Jan 21 14:11:48 crc kubenswrapper[4765]: I0121 14:11:48.251503 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5sm\" (UniqueName: \"kubernetes.io/projected/f2dc91b5-0f41-4899-90c9-e0dcab80e4d8-kube-api-access-qg5sm\") pod \"f2dc91b5-0f41-4899-90c9-e0dcab80e4d8\" (UID: \"f2dc91b5-0f41-4899-90c9-e0dcab80e4d8\") " Jan 21 14:11:48 crc kubenswrapper[4765]: I0121 14:11:48.260921 4765 scope.go:117] "RemoveContainer" containerID="e311900b3468a4f0f64592bf9989a203a47d4c97b1df9c61af96d9f3cc861dc8" Jan 21 14:11:48 crc kubenswrapper[4765]: I0121 14:11:48.268658 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2dc91b5-0f41-4899-90c9-e0dcab80e4d8-kube-api-access-qg5sm" (OuterVolumeSpecName: "kube-api-access-qg5sm") pod "f2dc91b5-0f41-4899-90c9-e0dcab80e4d8" (UID: "f2dc91b5-0f41-4899-90c9-e0dcab80e4d8"). InnerVolumeSpecName "kube-api-access-qg5sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:11:48 crc kubenswrapper[4765]: I0121 14:11:48.335459 4765 scope.go:117] "RemoveContainer" containerID="17a810fdacc021ad0ab0645d1207a65f40c0a14c2de5221844979f563062fc03" Jan 21 14:11:48 crc kubenswrapper[4765]: E0121 14:11:48.336631 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17a810fdacc021ad0ab0645d1207a65f40c0a14c2de5221844979f563062fc03\": container with ID starting with 17a810fdacc021ad0ab0645d1207a65f40c0a14c2de5221844979f563062fc03 not found: ID does not exist" containerID="17a810fdacc021ad0ab0645d1207a65f40c0a14c2de5221844979f563062fc03" Jan 21 14:11:48 crc kubenswrapper[4765]: I0121 14:11:48.336677 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17a810fdacc021ad0ab0645d1207a65f40c0a14c2de5221844979f563062fc03"} err="failed to get container status \"17a810fdacc021ad0ab0645d1207a65f40c0a14c2de5221844979f563062fc03\": rpc error: code = NotFound desc = could not find container \"17a810fdacc021ad0ab0645d1207a65f40c0a14c2de5221844979f563062fc03\": container with ID starting with 17a810fdacc021ad0ab0645d1207a65f40c0a14c2de5221844979f563062fc03 not found: ID does not exist" Jan 21 14:11:48 crc kubenswrapper[4765]: I0121 14:11:48.336708 4765 scope.go:117] "RemoveContainer" containerID="e311900b3468a4f0f64592bf9989a203a47d4c97b1df9c61af96d9f3cc861dc8" Jan 21 14:11:48 crc kubenswrapper[4765]: E0121 14:11:48.337675 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e311900b3468a4f0f64592bf9989a203a47d4c97b1df9c61af96d9f3cc861dc8\": container with ID starting with e311900b3468a4f0f64592bf9989a203a47d4c97b1df9c61af96d9f3cc861dc8 not found: ID does not exist" containerID="e311900b3468a4f0f64592bf9989a203a47d4c97b1df9c61af96d9f3cc861dc8" Jan 21 14:11:48 crc kubenswrapper[4765]: I0121 14:11:48.337834 4765 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e311900b3468a4f0f64592bf9989a203a47d4c97b1df9c61af96d9f3cc861dc8"} err="failed to get container status \"e311900b3468a4f0f64592bf9989a203a47d4c97b1df9c61af96d9f3cc861dc8\": rpc error: code = NotFound desc = could not find container \"e311900b3468a4f0f64592bf9989a203a47d4c97b1df9c61af96d9f3cc861dc8\": container with ID starting with e311900b3468a4f0f64592bf9989a203a47d4c97b1df9c61af96d9f3cc861dc8 not found: ID does not exist" Jan 21 14:11:48 crc kubenswrapper[4765]: I0121 14:11:48.354010 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5sm\" (UniqueName: \"kubernetes.io/projected/f2dc91b5-0f41-4899-90c9-e0dcab80e4d8-kube-api-access-qg5sm\") on node \"crc\" DevicePath \"\"" Jan 21 14:11:48 crc kubenswrapper[4765]: I0121 14:11:48.427784 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2dc91b5-0f41-4899-90c9-e0dcab80e4d8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f2dc91b5-0f41-4899-90c9-e0dcab80e4d8" (UID: "f2dc91b5-0f41-4899-90c9-e0dcab80e4d8"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:11:48 crc kubenswrapper[4765]: I0121 14:11:48.456166 4765 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2dc91b5-0f41-4899-90c9-e0dcab80e4d8-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 21 14:11:49 crc kubenswrapper[4765]: I0121 14:11:49.627396 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2dc91b5-0f41-4899-90c9-e0dcab80e4d8" path="/var/lib/kubelet/pods/f2dc91b5-0f41-4899-90c9-e0dcab80e4d8/volumes" Jan 21 14:11:52 crc kubenswrapper[4765]: I0121 14:11:52.089518 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xblm9" Jan 21 14:11:52 crc kubenswrapper[4765]: I0121 14:11:52.089848 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xblm9" Jan 21 14:11:52 crc kubenswrapper[4765]: I0121 14:11:52.162109 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xblm9" Jan 21 14:11:52 crc kubenswrapper[4765]: I0121 14:11:52.336521 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xblm9" Jan 21 14:11:52 crc kubenswrapper[4765]: I0121 14:11:52.405038 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xblm9"] Jan 21 14:11:54 crc kubenswrapper[4765]: I0121 14:11:54.291746 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xblm9" podUID="3a347ae7-e674-45b7-9e93-ddecdb4a0cad" containerName="registry-server" containerID="cri-o://7c9c784863fc93deefc7043b676843810cf891f254f47ebef7c43bc14d90a471" gracePeriod=2 Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.030353 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xblm9" Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.108331 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svfq5\" (UniqueName: \"kubernetes.io/projected/3a347ae7-e674-45b7-9e93-ddecdb4a0cad-kube-api-access-svfq5\") pod \"3a347ae7-e674-45b7-9e93-ddecdb4a0cad\" (UID: \"3a347ae7-e674-45b7-9e93-ddecdb4a0cad\") " Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.108514 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a347ae7-e674-45b7-9e93-ddecdb4a0cad-utilities\") pod \"3a347ae7-e674-45b7-9e93-ddecdb4a0cad\" (UID: \"3a347ae7-e674-45b7-9e93-ddecdb4a0cad\") " Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.108650 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a347ae7-e674-45b7-9e93-ddecdb4a0cad-catalog-content\") pod \"3a347ae7-e674-45b7-9e93-ddecdb4a0cad\" (UID: \"3a347ae7-e674-45b7-9e93-ddecdb4a0cad\") " Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.109554 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a347ae7-e674-45b7-9e93-ddecdb4a0cad-utilities" (OuterVolumeSpecName: "utilities") pod "3a347ae7-e674-45b7-9e93-ddecdb4a0cad" (UID: "3a347ae7-e674-45b7-9e93-ddecdb4a0cad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.117550 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a347ae7-e674-45b7-9e93-ddecdb4a0cad-kube-api-access-svfq5" (OuterVolumeSpecName: "kube-api-access-svfq5") pod "3a347ae7-e674-45b7-9e93-ddecdb4a0cad" (UID: "3a347ae7-e674-45b7-9e93-ddecdb4a0cad"). InnerVolumeSpecName "kube-api-access-svfq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.138919 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a347ae7-e674-45b7-9e93-ddecdb4a0cad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a347ae7-e674-45b7-9e93-ddecdb4a0cad" (UID: "3a347ae7-e674-45b7-9e93-ddecdb4a0cad"). InnerVolumeSpecName "catalog-content". 
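Each volume above goes through the same three steps: "UnmountVolume started" (the reconciler notices the pod is gone), "UnmountVolume.TearDown succeeded" (the plugin unmounts it), and "Volume detached" (the volume is dropped from the actual state of world). A schematic sketch of that order with toy types, not kubelet's volumemanager:

// volume_teardown.go - the three-step teardown order visible in the entries above.
package main

import "fmt"

type volume struct{ name, plugin string }

func teardown(podUID string, vols []volume) {
	for _, v := range vols {
		fmt.Printf("UnmountVolume started for volume %q pod %q\n", v.name, podUID)
		// ...unmount from /var/lib/kubelet/pods/<uid>/volumes happens here...
		fmt.Printf("UnmountVolume.TearDown succeeded for volume %q (plugin %s)\n", v.name, v.plugin)
		// only after a successful TearDown is the volume removed from the
		// actual state of world, which is what "Volume detached" reports
		fmt.Printf("Volume detached for volume %q on node \"crc\"\n", v.name)
	}
}

func main() {
	teardown("3a347ae7-e674-45b7-9e93-ddecdb4a0cad", []volume{
		{"kube-api-access-svfq5", "kubernetes.io/projected"},
		{"utilities", "kubernetes.io/empty-dir"},
		{"catalog-content", "kubernetes.io/empty-dir"},
	})
}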
Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.211079 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a347ae7-e674-45b7-9e93-ddecdb4a0cad-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.211113 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svfq5\" (UniqueName: \"kubernetes.io/projected/3a347ae7-e674-45b7-9e93-ddecdb4a0cad-kube-api-access-svfq5\") on node \"crc\" DevicePath \"\""
Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.211124 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a347ae7-e674-45b7-9e93-ddecdb4a0cad-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.305293 4765 generic.go:334] "Generic (PLEG): container finished" podID="3a347ae7-e674-45b7-9e93-ddecdb4a0cad" containerID="7c9c784863fc93deefc7043b676843810cf891f254f47ebef7c43bc14d90a471" exitCode=0
Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.305342 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xblm9" event={"ID":"3a347ae7-e674-45b7-9e93-ddecdb4a0cad","Type":"ContainerDied","Data":"7c9c784863fc93deefc7043b676843810cf891f254f47ebef7c43bc14d90a471"}
Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.305363 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xblm9"
Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.305378 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xblm9" event={"ID":"3a347ae7-e674-45b7-9e93-ddecdb4a0cad","Type":"ContainerDied","Data":"5bcfb3450714082e0b9405c16f53c1b51bce3a7403df4c6f6d2618359648a59e"}
Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.305397 4765 scope.go:117] "RemoveContainer" containerID="7c9c784863fc93deefc7043b676843810cf891f254f47ebef7c43bc14d90a471"
Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.327490 4765 scope.go:117] "RemoveContainer" containerID="0d021ed48688e0284e6c07a5e221cae1369598a0cd6a1b5c2543c7043ae6be40"
Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.355764 4765 scope.go:117] "RemoveContainer" containerID="d0e1e19c7af1437c7a08203fd1d541dc998bf10eca142e19a819b647b0923600"
Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.355922 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xblm9"]
Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.367826 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xblm9"]
Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.398777 4765 scope.go:117] "RemoveContainer" containerID="7c9c784863fc93deefc7043b676843810cf891f254f47ebef7c43bc14d90a471"
Jan 21 14:11:55 crc kubenswrapper[4765]: E0121 14:11:55.399304 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c9c784863fc93deefc7043b676843810cf891f254f47ebef7c43bc14d90a471\": container with ID starting with 7c9c784863fc93deefc7043b676843810cf891f254f47ebef7c43bc14d90a471 not found: ID does not exist" containerID="7c9c784863fc93deefc7043b676843810cf891f254f47ebef7c43bc14d90a471"
Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.399342 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c9c784863fc93deefc7043b676843810cf891f254f47ebef7c43bc14d90a471"} err="failed to get container status \"7c9c784863fc93deefc7043b676843810cf891f254f47ebef7c43bc14d90a471\": rpc error: code = NotFound desc = could not find container \"7c9c784863fc93deefc7043b676843810cf891f254f47ebef7c43bc14d90a471\": container with ID starting with 7c9c784863fc93deefc7043b676843810cf891f254f47ebef7c43bc14d90a471 not found: ID does not exist"
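The "Generic (PLEG): container finished" / ContainerDied pairs above come from the pod lifecycle event generator relisting runtime state and diffing it against the previous snapshot; note it emits one event for the registry-server container (7c9c...) and a second for the pod sandbox (5bcf...). A toy illustration of that diffing, with invented state types:

// pleg_diff.go - turning two runtime snapshots into ContainerDied events.
package main

import "fmt"

type state map[string]string // containerID -> "running" | "exited"

func relist(prev, curr state, podID string) {
	for id, s := range curr {
		if s == "exited" && prev[id] == "running" {
			fmt.Printf("event for pod %s: {Type: ContainerDied, Data: %s}\n", podID, id)
		}
	}
}

func main() {
	prev := state{"7c9c7848...": "running", "5bcfb345...": "running"}
	curr := state{"7c9c7848...": "exited", "5bcfb345...": "exited"}
	relist(prev, curr, "3a347ae7-e674-45b7-9e93-ddecdb4a0cad")
}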
Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.399370 4765 scope.go:117] "RemoveContainer" containerID="0d021ed48688e0284e6c07a5e221cae1369598a0cd6a1b5c2543c7043ae6be40"
Jan 21 14:11:55 crc kubenswrapper[4765]: E0121 14:11:55.399844 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d021ed48688e0284e6c07a5e221cae1369598a0cd6a1b5c2543c7043ae6be40\": container with ID starting with 0d021ed48688e0284e6c07a5e221cae1369598a0cd6a1b5c2543c7043ae6be40 not found: ID does not exist" containerID="0d021ed48688e0284e6c07a5e221cae1369598a0cd6a1b5c2543c7043ae6be40"
Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.399891 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d021ed48688e0284e6c07a5e221cae1369598a0cd6a1b5c2543c7043ae6be40"} err="failed to get container status \"0d021ed48688e0284e6c07a5e221cae1369598a0cd6a1b5c2543c7043ae6be40\": rpc error: code = NotFound desc = could not find container \"0d021ed48688e0284e6c07a5e221cae1369598a0cd6a1b5c2543c7043ae6be40\": container with ID starting with 0d021ed48688e0284e6c07a5e221cae1369598a0cd6a1b5c2543c7043ae6be40 not found: ID does not exist"
Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.399919 4765 scope.go:117] "RemoveContainer" containerID="d0e1e19c7af1437c7a08203fd1d541dc998bf10eca142e19a819b647b0923600"
Jan 21 14:11:55 crc kubenswrapper[4765]: E0121 14:11:55.400339 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0e1e19c7af1437c7a08203fd1d541dc998bf10eca142e19a819b647b0923600\": container with ID starting with d0e1e19c7af1437c7a08203fd1d541dc998bf10eca142e19a819b647b0923600 not found: ID does not exist" containerID="d0e1e19c7af1437c7a08203fd1d541dc998bf10eca142e19a819b647b0923600"
Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.400377 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0e1e19c7af1437c7a08203fd1d541dc998bf10eca142e19a819b647b0923600"} err="failed to get container status \"d0e1e19c7af1437c7a08203fd1d541dc998bf10eca142e19a819b647b0923600\": rpc error: code = NotFound desc = could not find container \"d0e1e19c7af1437c7a08203fd1d541dc998bf10eca142e19a819b647b0923600\": container with ID starting with d0e1e19c7af1437c7a08203fd1d541dc998bf10eca142e19a819b647b0923600 not found: ID does not exist"
Jan 21 14:11:55 crc kubenswrapper[4765]: I0121 14:11:55.626692 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a347ae7-e674-45b7-9e93-ddecdb4a0cad" path="/var/lib/kubelet/pods/3a347ae7-e674-45b7-9e93-ddecdb4a0cad/volumes"
Jan 21 14:12:14 crc kubenswrapper[4765]: I0121 14:12:14.445845 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 14:12:14 crc kubenswrapper[4765]: I0121 14:12:14.446485 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 14:12:44 crc kubenswrapper[4765]: I0121 14:12:44.446180 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 14:12:44 crc kubenswrapper[4765]: I0121 14:12:44.447314 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 14:12:44 crc kubenswrapper[4765]: I0121 14:12:44.447380 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v72nq"
Jan 21 14:12:44 crc kubenswrapper[4765]: I0121 14:12:44.447922 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ca0f5d9400cc961af6025381ed82b34f4e78bfb4f3b3d8562479cb007ef5b63"} pod="openshift-machine-config-operator/machine-config-daemon-v72nq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 14:12:44 crc kubenswrapper[4765]: I0121 14:12:44.447976 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" containerID="cri-o://2ca0f5d9400cc961af6025381ed82b34f4e78bfb4f3b3d8562479cb007ef5b63" gracePeriod=600
Jan 21 14:12:44 crc kubenswrapper[4765]: I0121 14:12:44.785274 4765 generic.go:334] "Generic (PLEG): container finished" podID="e149390c-e4da-4dfd-bed2-b14de058f921" containerID="2ca0f5d9400cc961af6025381ed82b34f4e78bfb4f3b3d8562479cb007ef5b63" exitCode=0
Jan 21 14:12:44 crc kubenswrapper[4765]: I0121 14:12:44.785417 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerDied","Data":"2ca0f5d9400cc961af6025381ed82b34f4e78bfb4f3b3d8562479cb007ef5b63"}
Jan 21 14:12:44 crc kubenswrapper[4765]: I0121 14:12:44.785616 4765 scope.go:117] "RemoveContainer" containerID="5a7fbca33d0c185ed790bb52041a38f56846a307ef3a64aaf7b19649ab21cd60"
Jan 21 14:12:45 crc kubenswrapper[4765]: I0121 14:12:45.796827 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435"}
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.021498 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cttk8"]
Jan 21 14:13:22 crc kubenswrapper[4765]: E0121 14:13:22.022627 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a347ae7-e674-45b7-9e93-ddecdb4a0cad" containerName="extract-content"
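Above, the liveness probe for machine-config-daemon fails with connection refused at 14:12:14 and again at 14:12:44, and on the failure that crosses the threshold the kubelet kills the container (gracePeriod=600) and starts a replacement (new container ID 221c2...). A minimal sketch of such an HTTP prober; the threshold of 3 is an assumption here, the real value comes from the probe spec's failureThreshold:

// liveness_probe.go - an HTTP liveness check of the kind failing above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probeOnce(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	const failureThreshold = 3 // assumed value for illustration
	failures := 0
	for i := 0; i < failureThreshold; i++ {
		if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
			failures++
			fmt.Println("Probe failed:", err)
		} else {
			failures = 0 // any success resets the consecutive-failure count
		}
	}
	if failures >= failureThreshold {
		fmt.Println("Container failed liveness probe, will be restarted")
	}
}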
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.022646 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a347ae7-e674-45b7-9e93-ddecdb4a0cad" containerName="extract-content"
Jan 21 14:13:22 crc kubenswrapper[4765]: E0121 14:13:22.022684 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a347ae7-e674-45b7-9e93-ddecdb4a0cad" containerName="extract-utilities"
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.022694 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a347ae7-e674-45b7-9e93-ddecdb4a0cad" containerName="extract-utilities"
Jan 21 14:13:22 crc kubenswrapper[4765]: E0121 14:13:22.022705 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a347ae7-e674-45b7-9e93-ddecdb4a0cad" containerName="registry-server"
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.022712 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a347ae7-e674-45b7-9e93-ddecdb4a0cad" containerName="registry-server"
Jan 21 14:13:22 crc kubenswrapper[4765]: E0121 14:13:22.022733 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2dc91b5-0f41-4899-90c9-e0dcab80e4d8" containerName="copy"
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.022742 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2dc91b5-0f41-4899-90c9-e0dcab80e4d8" containerName="copy"
Jan 21 14:13:22 crc kubenswrapper[4765]: E0121 14:13:22.022759 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2dc91b5-0f41-4899-90c9-e0dcab80e4d8" containerName="gather"
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.022767 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2dc91b5-0f41-4899-90c9-e0dcab80e4d8" containerName="gather"
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.023003 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a347ae7-e674-45b7-9e93-ddecdb4a0cad" containerName="registry-server"
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.023024 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2dc91b5-0f41-4899-90c9-e0dcab80e4d8" containerName="copy"
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.023041 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2dc91b5-0f41-4899-90c9-e0dcab80e4d8" containerName="gather"
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.026386 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cttk8"
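The RemoveStaleState / "Deleted CPUSet assignment" burst above is the CPU and memory managers pruning checkpointed per-container assignments that belong to pods no longer active (both the catalog pod and the must-gather pod were deleted earlier). A toy version of that pruning pass, not the kubelet implementation:

// stale_state.go - drop resource-manager state for pods that are gone.
package main

import "fmt"

type key struct{ podUID, container string }

func removeStaleState(assignments map[key]string, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(assignments, k) // deleting during range is safe in Go
		}
	}
}

func main() {
	assignments := map[key]string{
		{"3a347ae7-e674-45b7-9e93-ddecdb4a0cad", "registry-server"}: "cpuset:0-3",
		{"f2dc91b5-0f41-4899-90c9-e0dcab80e4d8", "gather"}:          "cpuset:0-3",
	}
	removeStaleState(assignments, map[string]bool{ /* neither pod is active */ })
	fmt.Println("remaining assignments:", len(assignments)) // 0
}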
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.040823 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cttk8"]
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.134031 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxmnc\" (UniqueName: \"kubernetes.io/projected/bb092473-fc55-4bf7-9ea0-f8f7efa715e3-kube-api-access-wxmnc\") pod \"certified-operators-cttk8\" (UID: \"bb092473-fc55-4bf7-9ea0-f8f7efa715e3\") " pod="openshift-marketplace/certified-operators-cttk8"
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.134116 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb092473-fc55-4bf7-9ea0-f8f7efa715e3-catalog-content\") pod \"certified-operators-cttk8\" (UID: \"bb092473-fc55-4bf7-9ea0-f8f7efa715e3\") " pod="openshift-marketplace/certified-operators-cttk8"
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.134175 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb092473-fc55-4bf7-9ea0-f8f7efa715e3-utilities\") pod \"certified-operators-cttk8\" (UID: \"bb092473-fc55-4bf7-9ea0-f8f7efa715e3\") " pod="openshift-marketplace/certified-operators-cttk8"
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.236422 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxmnc\" (UniqueName: \"kubernetes.io/projected/bb092473-fc55-4bf7-9ea0-f8f7efa715e3-kube-api-access-wxmnc\") pod \"certified-operators-cttk8\" (UID: \"bb092473-fc55-4bf7-9ea0-f8f7efa715e3\") " pod="openshift-marketplace/certified-operators-cttk8"
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.236482 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb092473-fc55-4bf7-9ea0-f8f7efa715e3-catalog-content\") pod \"certified-operators-cttk8\" (UID: \"bb092473-fc55-4bf7-9ea0-f8f7efa715e3\") " pod="openshift-marketplace/certified-operators-cttk8"
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.236521 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb092473-fc55-4bf7-9ea0-f8f7efa715e3-utilities\") pod \"certified-operators-cttk8\" (UID: \"bb092473-fc55-4bf7-9ea0-f8f7efa715e3\") " pod="openshift-marketplace/certified-operators-cttk8"
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.237045 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb092473-fc55-4bf7-9ea0-f8f7efa715e3-utilities\") pod \"certified-operators-cttk8\" (UID: \"bb092473-fc55-4bf7-9ea0-f8f7efa715e3\") " pod="openshift-marketplace/certified-operators-cttk8"
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.237351 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb092473-fc55-4bf7-9ea0-f8f7efa715e3-catalog-content\") pod \"certified-operators-cttk8\" (UID: \"bb092473-fc55-4bf7-9ea0-f8f7efa715e3\") " pod="openshift-marketplace/certified-operators-cttk8"
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.262528 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxmnc\" (UniqueName: \"kubernetes.io/projected/bb092473-fc55-4bf7-9ea0-f8f7efa715e3-kube-api-access-wxmnc\") pod \"certified-operators-cttk8\" (UID: \"bb092473-fc55-4bf7-9ea0-f8f7efa715e3\") " pod="openshift-marketplace/certified-operators-cttk8"
Jan 21 14:13:22 crc kubenswrapper[4765]: I0121 14:13:22.349612 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cttk8"
Jan 21 14:13:23 crc kubenswrapper[4765]: I0121 14:13:23.117276 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cttk8"]
Jan 21 14:13:23 crc kubenswrapper[4765]: W0121 14:13:23.135969 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb092473_fc55_4bf7_9ea0_f8f7efa715e3.slice/crio-b8177dca94fbd62c7d98f0498a9e4b7f0eaf8afd66874d1c64417f2c5f093de5 WatchSource:0}: Error finding container b8177dca94fbd62c7d98f0498a9e4b7f0eaf8afd66874d1c64417f2c5f093de5: Status 404 returned error can't find the container with id b8177dca94fbd62c7d98f0498a9e4b7f0eaf8afd66874d1c64417f2c5f093de5
Jan 21 14:13:24 crc kubenswrapper[4765]: I0121 14:13:24.138430 4765 generic.go:334] "Generic (PLEG): container finished" podID="bb092473-fc55-4bf7-9ea0-f8f7efa715e3" containerID="b0e9765e2f8e7c95508491424d2f35a0e0dd7696a7c36f2f8e4e8ad306f10f3b" exitCode=0
Jan 21 14:13:24 crc kubenswrapper[4765]: I0121 14:13:24.138731 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cttk8" event={"ID":"bb092473-fc55-4bf7-9ea0-f8f7efa715e3","Type":"ContainerDied","Data":"b0e9765e2f8e7c95508491424d2f35a0e0dd7696a7c36f2f8e4e8ad306f10f3b"}
Jan 21 14:13:24 crc kubenswrapper[4765]: I0121 14:13:24.138765 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cttk8" event={"ID":"bb092473-fc55-4bf7-9ea0-f8f7efa715e3","Type":"ContainerStarted","Data":"b8177dca94fbd62c7d98f0498a9e4b7f0eaf8afd66874d1c64417f2c5f093de5"}
Jan 21 14:13:29 crc kubenswrapper[4765]: I0121 14:13:29.184726 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cttk8" event={"ID":"bb092473-fc55-4bf7-9ea0-f8f7efa715e3","Type":"ContainerStarted","Data":"a2ac1ee632da3a2633f4fb79d754b761785952a072904a7aa307bc3fa309ac87"}
Jan 21 14:13:30 crc kubenswrapper[4765]: I0121 14:13:30.194106 4765 generic.go:334] "Generic (PLEG): container finished" podID="bb092473-fc55-4bf7-9ea0-f8f7efa715e3" containerID="a2ac1ee632da3a2633f4fb79d754b761785952a072904a7aa307bc3fa309ac87" exitCode=0
Jan 21 14:13:30 crc kubenswrapper[4765]: I0121 14:13:30.194151 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cttk8" event={"ID":"bb092473-fc55-4bf7-9ea0-f8f7efa715e3","Type":"ContainerDied","Data":"a2ac1ee632da3a2633f4fb79d754b761785952a072904a7aa307bc3fa309ac87"}
Jan 21 14:13:31 crc kubenswrapper[4765]: I0121 14:13:31.206083 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cttk8" event={"ID":"bb092473-fc55-4bf7-9ea0-f8f7efa715e3","Type":"ContainerStarted","Data":"a5eee7c3004be3304e36374136b35bc2df8329040c5bd2a251396a2a8110b4e3"}
Jan 21 14:13:31 crc kubenswrapper[4765]: I0121 14:13:31.232323 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cttk8" podStartSLOduration=3.776065482 podStartE2EDuration="10.232304487s" podCreationTimestamp="2026-01-21 14:13:21 +0000 UTC" firstStartedPulling="2026-01-21 14:13:24.141426635 +0000 UTC m=+4265.159152457" lastFinishedPulling="2026-01-21 14:13:30.59766564 +0000 UTC m=+4271.615391462" observedRunningTime="2026-01-21 14:13:31.226230794 +0000 UTC m=+4272.243956626" watchObservedRunningTime="2026-01-21 14:13:31.232304487 +0000 UTC m=+4272.250030309"
Jan 21 14:13:32 crc kubenswrapper[4765]: I0121 14:13:32.349975 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cttk8"
Jan 21 14:13:32 crc kubenswrapper[4765]: I0121 14:13:32.350027 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cttk8"
Jan 21 14:13:33 crc kubenswrapper[4765]: I0121 14:13:33.411249 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cttk8" podUID="bb092473-fc55-4bf7-9ea0-f8f7efa715e3" containerName="registry-server" probeResult="failure" output=<
Jan 21 14:13:33 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s
Jan 21 14:13:33 crc kubenswrapper[4765]: >
Jan 21 14:13:42 crc kubenswrapper[4765]: I0121 14:13:42.400439 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cttk8"
Jan 21 14:13:42 crc kubenswrapper[4765]: I0121 14:13:42.466812 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cttk8"
Jan 21 14:13:42 crc kubenswrapper[4765]: I0121 14:13:42.642738 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cttk8"]
Jan 21 14:13:44 crc kubenswrapper[4765]: I0121 14:13:44.324824 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cttk8" podUID="bb092473-fc55-4bf7-9ea0-f8f7efa715e3" containerName="registry-server" containerID="cri-o://a5eee7c3004be3304e36374136b35bc2df8329040c5bd2a251396a2a8110b4e3" gracePeriod=2
Jan 21 14:13:44 crc kubenswrapper[4765]: I0121 14:13:44.845917 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cttk8"
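The startup probe output above ("timeout: failed to connect service \":50051\" within 1s") is a gRPC health check against the registry server, which is still loading its catalog when the probe first fires. A sketch of the equivalent check with a 1s deadline; the empty health-service name is an assumption:

// grpc_probe.go - a grpc-health-probe style startup check against :50051.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "127.0.0.1:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock()) // block so the 1s deadline covers connection setup
	if err != nil {
		// this is the shape of the failure in the log:
		fmt.Printf("timeout: failed to connect service %q within 1s\n", ":50051")
		return
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("health check error:", err)
		return
	}
	fmt.Println("status:", resp.Status) // SERVING once the registry is up
}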
Jan 21 14:13:44 crc kubenswrapper[4765]: I0121 14:13:44.901231 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxmnc\" (UniqueName: \"kubernetes.io/projected/bb092473-fc55-4bf7-9ea0-f8f7efa715e3-kube-api-access-wxmnc\") pod \"bb092473-fc55-4bf7-9ea0-f8f7efa715e3\" (UID: \"bb092473-fc55-4bf7-9ea0-f8f7efa715e3\") "
Jan 21 14:13:44 crc kubenswrapper[4765]: I0121 14:13:44.901311 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb092473-fc55-4bf7-9ea0-f8f7efa715e3-catalog-content\") pod \"bb092473-fc55-4bf7-9ea0-f8f7efa715e3\" (UID: \"bb092473-fc55-4bf7-9ea0-f8f7efa715e3\") "
Jan 21 14:13:44 crc kubenswrapper[4765]: I0121 14:13:44.901387 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb092473-fc55-4bf7-9ea0-f8f7efa715e3-utilities\") pod \"bb092473-fc55-4bf7-9ea0-f8f7efa715e3\" (UID: \"bb092473-fc55-4bf7-9ea0-f8f7efa715e3\") "
Jan 21 14:13:44 crc kubenswrapper[4765]: I0121 14:13:44.902373 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb092473-fc55-4bf7-9ea0-f8f7efa715e3-utilities" (OuterVolumeSpecName: "utilities") pod "bb092473-fc55-4bf7-9ea0-f8f7efa715e3" (UID: "bb092473-fc55-4bf7-9ea0-f8f7efa715e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:13:44 crc kubenswrapper[4765]: I0121 14:13:44.910645 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb092473-fc55-4bf7-9ea0-f8f7efa715e3-kube-api-access-wxmnc" (OuterVolumeSpecName: "kube-api-access-wxmnc") pod "bb092473-fc55-4bf7-9ea0-f8f7efa715e3" (UID: "bb092473-fc55-4bf7-9ea0-f8f7efa715e3"). InnerVolumeSpecName "kube-api-access-wxmnc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 14:13:44 crc kubenswrapper[4765]: I0121 14:13:44.951114 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb092473-fc55-4bf7-9ea0-f8f7efa715e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb092473-fc55-4bf7-9ea0-f8f7efa715e3" (UID: "bb092473-fc55-4bf7-9ea0-f8f7efa715e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 14:13:45 crc kubenswrapper[4765]: I0121 14:13:45.005368 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxmnc\" (UniqueName: \"kubernetes.io/projected/bb092473-fc55-4bf7-9ea0-f8f7efa715e3-kube-api-access-wxmnc\") on node \"crc\" DevicePath \"\""
Jan 21 14:13:45 crc kubenswrapper[4765]: I0121 14:13:45.007448 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb092473-fc55-4bf7-9ea0-f8f7efa715e3-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 14:13:45 crc kubenswrapper[4765]: I0121 14:13:45.007590 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb092473-fc55-4bf7-9ea0-f8f7efa715e3-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 14:13:45 crc kubenswrapper[4765]: I0121 14:13:45.358485 4765 generic.go:334] "Generic (PLEG): container finished" podID="bb092473-fc55-4bf7-9ea0-f8f7efa715e3" containerID="a5eee7c3004be3304e36374136b35bc2df8329040c5bd2a251396a2a8110b4e3" exitCode=0
Jan 21 14:13:45 crc kubenswrapper[4765]: I0121 14:13:45.358661 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cttk8"
Jan 21 14:13:45 crc kubenswrapper[4765]: I0121 14:13:45.360250 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cttk8" event={"ID":"bb092473-fc55-4bf7-9ea0-f8f7efa715e3","Type":"ContainerDied","Data":"a5eee7c3004be3304e36374136b35bc2df8329040c5bd2a251396a2a8110b4e3"}
Jan 21 14:13:45 crc kubenswrapper[4765]: I0121 14:13:45.360405 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cttk8" event={"ID":"bb092473-fc55-4bf7-9ea0-f8f7efa715e3","Type":"ContainerDied","Data":"b8177dca94fbd62c7d98f0498a9e4b7f0eaf8afd66874d1c64417f2c5f093de5"}
Jan 21 14:13:45 crc kubenswrapper[4765]: I0121 14:13:45.360485 4765 scope.go:117] "RemoveContainer" containerID="a5eee7c3004be3304e36374136b35bc2df8329040c5bd2a251396a2a8110b4e3"
Jan 21 14:13:45 crc kubenswrapper[4765]: I0121 14:13:45.390502 4765 scope.go:117] "RemoveContainer" containerID="a2ac1ee632da3a2633f4fb79d754b761785952a072904a7aa307bc3fa309ac87"
Jan 21 14:13:45 crc kubenswrapper[4765]: I0121 14:13:45.411900 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cttk8"]
Jan 21 14:13:45 crc kubenswrapper[4765]: I0121 14:13:45.424255 4765 scope.go:117] "RemoveContainer" containerID="b0e9765e2f8e7c95508491424d2f35a0e0dd7696a7c36f2f8e4e8ad306f10f3b"
Jan 21 14:13:45 crc kubenswrapper[4765]: I0121 14:13:45.429680 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cttk8"]
Jan 21 14:13:45 crc kubenswrapper[4765]: I0121 14:13:45.467003 4765 scope.go:117] "RemoveContainer" containerID="a5eee7c3004be3304e36374136b35bc2df8329040c5bd2a251396a2a8110b4e3"
Jan 21 14:13:45 crc kubenswrapper[4765]: E0121 14:13:45.468049 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5eee7c3004be3304e36374136b35bc2df8329040c5bd2a251396a2a8110b4e3\": container with ID starting with a5eee7c3004be3304e36374136b35bc2df8329040c5bd2a251396a2a8110b4e3 not found: ID does not exist" containerID="a5eee7c3004be3304e36374136b35bc2df8329040c5bd2a251396a2a8110b4e3"
Jan 21 14:13:45 crc kubenswrapper[4765]: I0121 14:13:45.468115 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5eee7c3004be3304e36374136b35bc2df8329040c5bd2a251396a2a8110b4e3"} err="failed to get container status \"a5eee7c3004be3304e36374136b35bc2df8329040c5bd2a251396a2a8110b4e3\": rpc error: code = NotFound desc = could not find container \"a5eee7c3004be3304e36374136b35bc2df8329040c5bd2a251396a2a8110b4e3\": container with ID starting with a5eee7c3004be3304e36374136b35bc2df8329040c5bd2a251396a2a8110b4e3 not found: ID does not exist"
Jan 21 14:13:45 crc kubenswrapper[4765]: I0121 14:13:45.468144 4765 scope.go:117] "RemoveContainer" containerID="a2ac1ee632da3a2633f4fb79d754b761785952a072904a7aa307bc3fa309ac87"
Jan 21 14:13:45 crc kubenswrapper[4765]: E0121 14:13:45.468628 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2ac1ee632da3a2633f4fb79d754b761785952a072904a7aa307bc3fa309ac87\": container with ID starting with a2ac1ee632da3a2633f4fb79d754b761785952a072904a7aa307bc3fa309ac87 not found: ID does not exist" containerID="a2ac1ee632da3a2633f4fb79d754b761785952a072904a7aa307bc3fa309ac87"
Jan 21 14:13:45 crc kubenswrapper[4765]: I0121 14:13:45.468678 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2ac1ee632da3a2633f4fb79d754b761785952a072904a7aa307bc3fa309ac87"} err="failed to get container status \"a2ac1ee632da3a2633f4fb79d754b761785952a072904a7aa307bc3fa309ac87\": rpc error: code = NotFound desc = could not find container \"a2ac1ee632da3a2633f4fb79d754b761785952a072904a7aa307bc3fa309ac87\": container with ID starting with a2ac1ee632da3a2633f4fb79d754b761785952a072904a7aa307bc3fa309ac87 not found: ID does not exist"
Jan 21 14:13:45 crc kubenswrapper[4765]: I0121 14:13:45.468712 4765 scope.go:117] "RemoveContainer" containerID="b0e9765e2f8e7c95508491424d2f35a0e0dd7696a7c36f2f8e4e8ad306f10f3b"
Jan 21 14:13:45 crc kubenswrapper[4765]: E0121 14:13:45.469072 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0e9765e2f8e7c95508491424d2f35a0e0dd7696a7c36f2f8e4e8ad306f10f3b\": container with ID starting with b0e9765e2f8e7c95508491424d2f35a0e0dd7696a7c36f2f8e4e8ad306f10f3b not found: ID does not exist" containerID="b0e9765e2f8e7c95508491424d2f35a0e0dd7696a7c36f2f8e4e8ad306f10f3b"
Jan 21 14:13:45 crc kubenswrapper[4765]: I0121 14:13:45.469103 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0e9765e2f8e7c95508491424d2f35a0e0dd7696a7c36f2f8e4e8ad306f10f3b"} err="failed to get container status \"b0e9765e2f8e7c95508491424d2f35a0e0dd7696a7c36f2f8e4e8ad306f10f3b\": rpc error: code = NotFound desc = could not find container \"b0e9765e2f8e7c95508491424d2f35a0e0dd7696a7c36f2f8e4e8ad306f10f3b\": container with ID starting with b0e9765e2f8e7c95508491424d2f35a0e0dd7696a7c36f2f8e4e8ad306f10f3b not found: ID does not exist"
Jan 21 14:13:45 crc kubenswrapper[4765]: I0121 14:13:45.627847 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb092473-fc55-4bf7-9ea0-f8f7efa715e3" path="/var/lib/kubelet/pods/bb092473-fc55-4bf7-9ea0-f8f7efa715e3/volumes"
Jan 21 14:13:47 crc kubenswrapper[4765]: I0121 14:13:47.132656 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rbvz4/must-gather-hgz6q"]
Jan 21 14:13:47 crc kubenswrapper[4765]: E0121 14:13:47.133526 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb092473-fc55-4bf7-9ea0-f8f7efa715e3" containerName="extract-utilities"
Jan 21 14:13:47 crc kubenswrapper[4765]: I0121 14:13:47.133546 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb092473-fc55-4bf7-9ea0-f8f7efa715e3" containerName="extract-utilities"
Jan 21 14:13:47 crc kubenswrapper[4765]: E0121 14:13:47.133563 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb092473-fc55-4bf7-9ea0-f8f7efa715e3" containerName="registry-server"
Jan 21 14:13:47 crc kubenswrapper[4765]: I0121 14:13:47.133571 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb092473-fc55-4bf7-9ea0-f8f7efa715e3" containerName="registry-server"
Jan 21 14:13:47 crc kubenswrapper[4765]: E0121 14:13:47.133601 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb092473-fc55-4bf7-9ea0-f8f7efa715e3" containerName="extract-content"
Jan 21 14:13:47 crc kubenswrapper[4765]: I0121 14:13:47.133609 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb092473-fc55-4bf7-9ea0-f8f7efa715e3" containerName="extract-content"
Jan 21 14:13:47 crc kubenswrapper[4765]: I0121 14:13:47.133829 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb092473-fc55-4bf7-9ea0-f8f7efa715e3" containerName="registry-server"
Jan 21 14:13:47 crc kubenswrapper[4765]: I0121 14:13:47.149994 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rbvz4/must-gather-hgz6q"
Jan 21 14:13:47 crc kubenswrapper[4765]: I0121 14:13:47.161120 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rbvz4"/"default-dockercfg-lmccl"
Jan 21 14:13:47 crc kubenswrapper[4765]: I0121 14:13:47.161475 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rbvz4"/"openshift-service-ca.crt"
Jan 21 14:13:47 crc kubenswrapper[4765]: I0121 14:13:47.162181 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rbvz4"/"kube-root-ca.crt"
Jan 21 14:13:47 crc kubenswrapper[4765]: I0121 14:13:47.169885 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rbvz4/must-gather-hgz6q"]
Jan 21 14:13:47 crc kubenswrapper[4765]: I0121 14:13:47.297630 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb2tn\" (UniqueName: \"kubernetes.io/projected/56d89599-1283-4f0e-a1da-c2ffeff901d5-kube-api-access-jb2tn\") pod \"must-gather-hgz6q\" (UID: \"56d89599-1283-4f0e-a1da-c2ffeff901d5\") " pod="openshift-must-gather-rbvz4/must-gather-hgz6q"
Jan 21 14:13:47 crc kubenswrapper[4765]: I0121 14:13:47.297819 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56d89599-1283-4f0e-a1da-c2ffeff901d5-must-gather-output\") pod \"must-gather-hgz6q\" (UID: \"56d89599-1283-4f0e-a1da-c2ffeff901d5\") " pod="openshift-must-gather-rbvz4/must-gather-hgz6q"
Jan 21 14:13:47 crc kubenswrapper[4765]: I0121 14:13:47.399411 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb2tn\" (UniqueName: \"kubernetes.io/projected/56d89599-1283-4f0e-a1da-c2ffeff901d5-kube-api-access-jb2tn\") pod \"must-gather-hgz6q\" (UID: \"56d89599-1283-4f0e-a1da-c2ffeff901d5\") " pod="openshift-must-gather-rbvz4/must-gather-hgz6q"
Jan 21 14:13:47 crc kubenswrapper[4765]: I0121 14:13:47.399554 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56d89599-1283-4f0e-a1da-c2ffeff901d5-must-gather-output\") pod \"must-gather-hgz6q\" (UID: \"56d89599-1283-4f0e-a1da-c2ffeff901d5\") " pod="openshift-must-gather-rbvz4/must-gather-hgz6q"
Jan 21 14:13:47 crc kubenswrapper[4765]: I0121 14:13:47.399939 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56d89599-1283-4f0e-a1da-c2ffeff901d5-must-gather-output\") pod \"must-gather-hgz6q\" (UID: \"56d89599-1283-4f0e-a1da-c2ffeff901d5\") " pod="openshift-must-gather-rbvz4/must-gather-hgz6q"
Jan 21 14:13:47 crc kubenswrapper[4765]: I0121 14:13:47.439825 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb2tn\" (UniqueName: \"kubernetes.io/projected/56d89599-1283-4f0e-a1da-c2ffeff901d5-kube-api-access-jb2tn\") pod \"must-gather-hgz6q\" (UID: \"56d89599-1283-4f0e-a1da-c2ffeff901d5\") " pod="openshift-must-gather-rbvz4/must-gather-hgz6q"
Jan 21 14:13:47 crc kubenswrapper[4765]: I0121 14:13:47.480932 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rbvz4/must-gather-hgz6q"
Jan 21 14:13:47 crc kubenswrapper[4765]: I0121 14:13:47.918306 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rbvz4/must-gather-hgz6q"]
Jan 21 14:13:48 crc kubenswrapper[4765]: I0121 14:13:48.392673 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvz4/must-gather-hgz6q" event={"ID":"56d89599-1283-4f0e-a1da-c2ffeff901d5","Type":"ContainerStarted","Data":"da3ecdbe9e8c17406dbf0a6f0a0a752e7854e6984bf663901883eba4c8c18a17"}
Jan 21 14:13:48 crc kubenswrapper[4765]: I0121 14:13:48.392926 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvz4/must-gather-hgz6q" event={"ID":"56d89599-1283-4f0e-a1da-c2ffeff901d5","Type":"ContainerStarted","Data":"1d69239a498400e8b5a7d5a518edd84d610fba42551b50a21f4fc09c3b70739c"}
Jan 21 14:13:49 crc kubenswrapper[4765]: I0121 14:13:49.404377 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvz4/must-gather-hgz6q" event={"ID":"56d89599-1283-4f0e-a1da-c2ffeff901d5","Type":"ContainerStarted","Data":"d1eb4cc9b8433b32f6ed9e5d5b6088df7b023b0dc889f6dbd2da78e6744d42aa"}
Jan 21 14:13:49 crc kubenswrapper[4765]: I0121 14:13:49.431012 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rbvz4/must-gather-hgz6q" podStartSLOduration=2.430995746 podStartE2EDuration="2.430995746s" podCreationTimestamp="2026-01-21 14:13:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:13:49.424428559 +0000 UTC m=+4290.442154381" watchObservedRunningTime="2026-01-21 14:13:49.430995746 +0000 UTC m=+4290.448721568"
Jan 21 14:13:52 crc kubenswrapper[4765]: I0121 14:13:52.964634 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rbvz4/crc-debug-5vnsj"]
Jan 21 14:13:52 crc kubenswrapper[4765]: I0121 14:13:52.967730 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rbvz4/crc-debug-5vnsj"
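Note the pull timestamps in the latency entry above: "0001-01-01 00:00:00 +0000 UTC" is Go's zero time.Time, meaning no image pull happened for this pod (the image was already on the node), which is why podStartSLOduration equals podStartE2EDuration here. A two-line demonstration:

// zero_pull_time.go - the zero time.Time as it appears in the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	var firstStartedPulling time.Time                // never set
	fmt.Println(firstStartedPulling)                 // 0001-01-01 00:00:00 +0000 UTC
	fmt.Println(firstStartedPulling.IsZero())        // true: treat as "no pull window"
}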
Jan 21 14:13:52 crc kubenswrapper[4765]: I0121 14:13:52.991587 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e87ad7ef-ba98-4813-930c-2278bfe0d953-host\") pod \"crc-debug-5vnsj\" (UID: \"e87ad7ef-ba98-4813-930c-2278bfe0d953\") " pod="openshift-must-gather-rbvz4/crc-debug-5vnsj"
Jan 21 14:13:52 crc kubenswrapper[4765]: I0121 14:13:52.991693 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfbj8\" (UniqueName: \"kubernetes.io/projected/e87ad7ef-ba98-4813-930c-2278bfe0d953-kube-api-access-bfbj8\") pod \"crc-debug-5vnsj\" (UID: \"e87ad7ef-ba98-4813-930c-2278bfe0d953\") " pod="openshift-must-gather-rbvz4/crc-debug-5vnsj"
Jan 21 14:13:53 crc kubenswrapper[4765]: I0121 14:13:53.093690 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfbj8\" (UniqueName: \"kubernetes.io/projected/e87ad7ef-ba98-4813-930c-2278bfe0d953-kube-api-access-bfbj8\") pod \"crc-debug-5vnsj\" (UID: \"e87ad7ef-ba98-4813-930c-2278bfe0d953\") " pod="openshift-must-gather-rbvz4/crc-debug-5vnsj"
Jan 21 14:13:53 crc kubenswrapper[4765]: I0121 14:13:53.093805 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e87ad7ef-ba98-4813-930c-2278bfe0d953-host\") pod \"crc-debug-5vnsj\" (UID: \"e87ad7ef-ba98-4813-930c-2278bfe0d953\") " pod="openshift-must-gather-rbvz4/crc-debug-5vnsj"
Jan 21 14:13:53 crc kubenswrapper[4765]: I0121 14:13:53.093897 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e87ad7ef-ba98-4813-930c-2278bfe0d953-host\") pod \"crc-debug-5vnsj\" (UID: \"e87ad7ef-ba98-4813-930c-2278bfe0d953\") " pod="openshift-must-gather-rbvz4/crc-debug-5vnsj"
Jan 21 14:13:53 crc kubenswrapper[4765]: I0121 14:13:53.114848 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfbj8\" (UniqueName: \"kubernetes.io/projected/e87ad7ef-ba98-4813-930c-2278bfe0d953-kube-api-access-bfbj8\") pod \"crc-debug-5vnsj\" (UID: \"e87ad7ef-ba98-4813-930c-2278bfe0d953\") " pod="openshift-must-gather-rbvz4/crc-debug-5vnsj"
Jan 21 14:13:53 crc kubenswrapper[4765]: I0121 14:13:53.291865 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rbvz4/crc-debug-5vnsj"
Jan 21 14:13:53 crc kubenswrapper[4765]: I0121 14:13:53.438167 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvz4/crc-debug-5vnsj" event={"ID":"e87ad7ef-ba98-4813-930c-2278bfe0d953","Type":"ContainerStarted","Data":"5e3cba00f6a4ecad375041db63f0e7deb17b880b942f5bf12fd96848a3d14df2"}
Jan 21 14:13:54 crc kubenswrapper[4765]: I0121 14:13:54.452588 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvz4/crc-debug-5vnsj" event={"ID":"e87ad7ef-ba98-4813-930c-2278bfe0d953","Type":"ContainerStarted","Data":"335d19655d7dc2cfca3aed5c9e2315edf424762a90d23d834f5925fc963dc23e"}
Jan 21 14:13:54 crc kubenswrapper[4765]: I0121 14:13:54.480889 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rbvz4/crc-debug-5vnsj" podStartSLOduration=2.480869365 podStartE2EDuration="2.480869365s" podCreationTimestamp="2026-01-21 14:13:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 14:13:54.474999527 +0000 UTC m=+4295.492725349" watchObservedRunningTime="2026-01-21 14:13:54.480869365 +0000 UTC m=+4295.498595187"
Jan 21 14:13:56 crc kubenswrapper[4765]: I0121 14:13:56.962761 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6ccc6775fd-qhnc2_4424b63d-0688-473e-80e8-8cd4148911a1/barbican-api-log/0.log"
Jan 21 14:13:56 crc kubenswrapper[4765]: I0121 14:13:56.971835 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6ccc6775fd-qhnc2_4424b63d-0688-473e-80e8-8cd4148911a1/barbican-api/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.095740 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7fd49c47b6-4hvtg_8aca8cf8-41b9-44a4-8948-94717695f201/barbican-keystone-listener-log/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.101364 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7fd49c47b6-4hvtg_8aca8cf8-41b9-44a4-8948-94717695f201/barbican-keystone-listener/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.138312 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-667d97cc75-tm9lv_d9390565-b433-4d8e-a112-7f7539cbdc3e/barbican-worker-log/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.148410 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-667d97cc75-tm9lv_d9390565-b433-4d8e-a112-7f7539cbdc3e/barbican-worker/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.199077 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-c9xq2_244e5c68-a93a-44e7-a8fd-d4368ee754bd/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.241698 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e149475f-fb59-4dd4-92f6-d83b29234528/ceilometer-central-agent/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.265906 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e149475f-fb59-4dd4-92f6-d83b29234528/ceilometer-notification-agent/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.271803 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e149475f-fb59-4dd4-92f6-d83b29234528/sg-core/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.288145 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e149475f-fb59-4dd4-92f6-d83b29234528/proxy-httpd/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.305733 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264/cinder-api-log/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.372821 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ab3a1cb8-0dd2-4f2f-9a43-0a5cbba6b264/cinder-api/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.441504 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9d8e00dc-cddb-4ae9-a128-684e2ca459f7/cinder-scheduler/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.481046 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9d8e00dc-cddb-4ae9-a128-684e2ca459f7/probe/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.508974 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-s77bh_9a6275ee-1fe3-407a-b438-a189ac6b3241/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.571932 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vhz72_b30a7ddd-acca-4134-8807-675f980b4a4b/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.645191 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-sh9vb_8b82d059-d861-40e4-8892-ba17220d1b78/dnsmasq-dns/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.651520 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b6dc74c5-sh9vb_8b82d059-d861-40e4-8892-ba17220d1b78/init/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.682407 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-9ldg6_1c7356a7-bab7-4123-9f98-a484d751e8e7/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.693449 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_165f5e89-08b4-465c-acc6-52d76f9c0db0/glance-log/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.714975 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_165f5e89-08b4-465c-acc6-52d76f9c0db0/glance-httpd/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.726284 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_85a4c5bc-cacf-4c49-b285-295c9bfb7b74/glance-log/0.log"
Jan 21 14:13:57 crc kubenswrapper[4765]: I0121 14:13:57.751765 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_85a4c5bc-cacf-4c49-b285-295c9bfb7b74/glance-httpd/0.log"
Jan 21 14:13:58 crc kubenswrapper[4765]: I0121 14:13:58.144787 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-86c57777f6-gqpgv_1241b1f0-34c1-401a-b91f-13b72926cc2c/horizon-log/0.log"
Jan 21 14:13:58 crc kubenswrapper[4765]: I0121 14:13:58.270482 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-86c57777f6-gqpgv_1241b1f0-34c1-401a-b91f-13b72926cc2c/horizon/2.log"
Jan 21 14:13:58 crc kubenswrapper[4765]: I0121 14:13:58.277942 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-86c57777f6-gqpgv_1241b1f0-34c1-401a-b91f-13b72926cc2c/horizon/1.log"
Jan 21 14:13:58 crc kubenswrapper[4765]: I0121 14:13:58.310710 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qvlb2_833e4a2d-2bcb-4dfe-90ba-2e239625d5bf/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:13:58 crc kubenswrapper[4765]: I0121 14:13:58.622074 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-4p9nk_d143acd1-ab20-495a-ba80-139132d247e2/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:13:58 crc kubenswrapper[4765]: I0121 14:13:58.859497 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7c5d9867cf-9ffzm_80b18085-cc60-4891-bf22-0c8535624d5b/keystone-api/0.log"
Jan 21 14:13:58 crc kubenswrapper[4765]: I0121 14:13:58.870291 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29483401-4rwzk_dac68597-6a74-41ae-987b-e6968ab9931d/keystone-cron/0.log"
Jan 21 14:13:58 crc kubenswrapper[4765]: I0121 14:13:58.887344 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_f9ebca0c-fe1b-4c55-a5e1-c7e132bf3d0a/kube-state-metrics/0.log"
Jan 21 14:13:58 crc kubenswrapper[4765]: I0121 14:13:58.929369 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-5dqkl_26624762-8a2d-4273-9f09-73895227b65c/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:14:18 crc kubenswrapper[4765]: I0121 14:14:18.231969 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skh9c_f05e7811-d30d-4f00-b816-a740a454c635/controller/0.log"
Jan 21 14:14:18 crc kubenswrapper[4765]: I0121 14:14:18.251106 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skh9c_f05e7811-d30d-4f00-b816-a740a454c635/kube-rbac-proxy/0.log"
Jan 21 14:14:18 crc kubenswrapper[4765]: I0121 14:14:18.259857 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-qlhwh_af902f5f-216b-41c7-b1e9-56953151dd65/frr-k8s-webhook-server/0.log"
Jan 21 14:14:18 crc kubenswrapper[4765]: I0121 14:14:18.305124 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/controller/0.log"
Jan 21 14:14:20 crc kubenswrapper[4765]: I0121 14:14:20.524788 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/frr/0.log"
Jan 21 14:14:20 crc kubenswrapper[4765]: I0121 14:14:20.538060 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/reloader/0.log"
Jan 21 14:14:20 crc kubenswrapper[4765]: I0121 14:14:20.547085 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/frr-metrics/0.log"
Jan 21 14:14:20 crc kubenswrapper[4765]: I0121 14:14:20.553236 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/kube-rbac-proxy/0.log"
Jan 21 14:14:20 crc kubenswrapper[4765]: I0121 14:14:20.564617 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/kube-rbac-proxy-frr/0.log"
Jan 21 14:14:20 crc kubenswrapper[4765]: I0121 14:14:20.571008 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/cp-frr-files/0.log"
Jan 21 14:14:20 crc kubenswrapper[4765]: I0121 14:14:20.577963 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/cp-reloader/0.log"
Jan 21 14:14:20 crc kubenswrapper[4765]: I0121 14:14:20.592346 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/cp-metrics/0.log"
Jan 21 14:14:20 crc kubenswrapper[4765]: I0121 14:14:20.635181 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c66566bf6-ls8r8_57ed60d8-a38f-47ba-b66d-6e7e557b4399/manager/0.log"
Jan 21 14:14:20 crc kubenswrapper[4765]: I0121 14:14:20.647105 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-77844fbdcc-cgv2c_7ba871a2-babc-4cc6-a13b-4fa78e3d0580/webhook-server/0.log"
Jan 21 14:14:21 crc kubenswrapper[4765]: I0121 14:14:21.139359 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vswxq_8f59aeb8-b8fe-44bc-9e55-94eba06a676b/speaker/0.log"
Jan 21 14:14:21 crc kubenswrapper[4765]: I0121 14:14:21.148133 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vswxq_8f59aeb8-b8fe-44bc-9e55-94eba06a676b/kube-rbac-proxy/0.log"
Jan 21 14:14:21 crc kubenswrapper[4765]: I0121 14:14:21.416671 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_02d30b98-43d0-4b3f-82c0-64193524da98/memcached/0.log"
Jan 21 14:14:21 crc kubenswrapper[4765]: I0121 14:14:21.509949 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77dcd8ffdf-64j8s_d069b575-51e3-4f93-bff8-a1f0cb141797/neutron-api/0.log"
Jan 21 14:14:21 crc kubenswrapper[4765]: I0121 14:14:21.537273 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77dcd8ffdf-64j8s_d069b575-51e3-4f93-bff8-a1f0cb141797/neutron-httpd/0.log"
Jan 21 14:14:21 crc kubenswrapper[4765]: I0121 14:14:21.566070 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-hmdqs_8c7b4344-a85a-4fb1-ac19-c9cfef6f91e8/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:14:21 crc kubenswrapper[4765]: I0121 14:14:21.671316 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e6ce4b6e-90fe-41ba-a3e8-15fc98276798/nova-api-log/0.log"
Jan 21 14:14:22 crc kubenswrapper[4765]: I0121 14:14:22.012430 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e6ce4b6e-90fe-41ba-a3e8-15fc98276798/nova-api-api/0.log"
Jan 21 14:14:22 crc kubenswrapper[4765]: I0121 14:14:22.122656 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_79930bf0-36ee-4f2e-8530-0bcdf3c9d998/nova-cell0-conductor-conductor/0.log"
Jan 21 14:14:22 crc kubenswrapper[4765]: I0121 14:14:22.243510 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_90f30caf-f36a-421c-b3fc-40d01f40d9e7/nova-cell1-conductor-conductor/0.log"
Jan 21 14:14:22 crc kubenswrapper[4765]: I0121 14:14:22.370444 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_a9571353-0716-428c-8462-0fa1c4fc8ab3/nova-cell1-novncproxy-novncproxy/0.log"
Jan 21 14:14:22 crc kubenswrapper[4765]: I0121 14:14:22.425097 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-pntmx_13a3818b-4be7-40d0-99d2-ae84ab4caceb/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 21 14:14:22 crc kubenswrapper[4765]: I0121 14:14:22.477640 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa/nova-metadata-log/0.log"
Jan 21 14:14:23 crc kubenswrapper[4765]: I0121 14:14:23.515583 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3dd59022-7dbc-4c0c-8bd0-8377a2b5d1fa/nova-metadata-metadata/0.log"
Jan 21 14:14:23 crc kubenswrapper[4765]: I0121 14:14:23.634553 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f1a509b9-a443-47bc-b693-4faa2e417ce8/nova-scheduler-scheduler/0.log"
Jan 21 14:14:23 crc kubenswrapper[4765]: I0121 14:14:23.663070 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_cf0cab45-7e21-4b1e-a868-b19db9379c99/galera/0.log"
Jan 21 14:14:23 crc kubenswrapper[4765]: I0121 14:14:23.674117 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_cf0cab45-7e21-4b1e-a868-b19db9379c99/mysql-bootstrap/0.log"
Jan 21 14:14:23 crc kubenswrapper[4765]: I0121 14:14:23.698851 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_00d8ba34-9c69-4d77-a58a-e8202aa68b31/galera/0.log"
Jan 21 14:14:23 crc kubenswrapper[4765]: I0121 14:14:23.709751 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_00d8ba34-9c69-4d77-a58a-e8202aa68b31/mysql-bootstrap/0.log"
Jan 21 14:14:23 crc kubenswrapper[4765]: I0121 14:14:23.729321 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_344fdbd2-c402-42e4-83d5-7e0bb3b978f6/openstackclient/0.log"
Jan 21 14:14:23 crc kubenswrapper[4765]: I0121 14:14:23.747728 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gkqpl_acf0ca9c-abda-4c3b-98d3-ca3e6189434a/ovn-controller/0.log"
Jan 21 14:14:23 crc kubenswrapper[4765]: I0121 14:14:23.764389 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-zmx6x_2c7cc04a-963e-42e5-82ca-674e3e576a27/openstack-network-exporter/0.log"
Jan 21 14:14:23 crc kubenswrapper[4765]: I0121 14:14:23.785831 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-64shj_0babea53-5832-46a5-a0e6-9fd9823cbbe9/ovsdb-server/0.log"
Jan 21 14:14:23 crc kubenswrapper[4765]: I0121 14:14:23.803328 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-64shj_0babea53-5832-46a5-a0e6-9fd9823cbbe9/ovs-vswitchd/0.log"
Jan 21 14:14:23 crc kubenswrapper[4765]: I0121 14:14:23.814512 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-64shj_0babea53-5832-46a5-a0e6-9fd9823cbbe9/ovsdb-server-init/0.log"
Jan 21
14:14:23 crc kubenswrapper[4765]: I0121 14:14:23.854437 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-cqnjn_db5e6d29-c1aa-4a16-99a9-e2d559619d90/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:14:23 crc kubenswrapper[4765]: I0121 14:14:23.864174 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_729e9cbc-22fc-4dea-a03d-5ebcd6c5f183/ovn-northd/0.log" Jan 21 14:14:23 crc kubenswrapper[4765]: I0121 14:14:23.875846 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_729e9cbc-22fc-4dea-a03d-5ebcd6c5f183/openstack-network-exporter/0.log" Jan 21 14:14:23 crc kubenswrapper[4765]: I0121 14:14:23.896071 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3/ovsdbserver-nb/0.log" Jan 21 14:14:23 crc kubenswrapper[4765]: I0121 14:14:23.905316 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7d40d1a6-1e7f-4643-82cf-ec7dfcfbf6d3/openstack-network-exporter/0.log" Jan 21 14:14:23 crc kubenswrapper[4765]: I0121 14:14:23.935009 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f1cf8f51-de39-4833-807f-f5ace97d9c30/ovsdbserver-sb/0.log" Jan 21 14:14:23 crc kubenswrapper[4765]: I0121 14:14:23.942318 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f1cf8f51-de39-4833-807f-f5ace97d9c30/openstack-network-exporter/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.010581 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-86cbcc788d-b897j_369424ef-89f9-462a-80aa-6eb36049f6b5/placement-log/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.108014 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-86cbcc788d-b897j_369424ef-89f9-462a-80aa-6eb36049f6b5/placement-api/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.139346 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f302fd12-fe7e-455b-94f0-aafe7ddb95f2/rabbitmq/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.144681 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f302fd12-fe7e-455b-94f0-aafe7ddb95f2/setup-container/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.174912 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_997a77bd-3d32-4db3-a34d-588eb0ea88a3/rabbitmq/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.179886 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_997a77bd-3d32-4db3-a34d-588eb0ea88a3/setup-container/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.197438 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-ll5xb_b4fe3c7f-5af2-4efc-bd46-40f31624c194/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.208516 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-7wzhn_2f4e0a44-0962-4477-9526-4df004dd3625/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.224932 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fx2fp_0dd97fb4-c9c8-4ea0-b0c9-69de90bfde12/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.238968 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-2kwpl_4de5f530-bcea-4203-8a79-9e9aebf97e0f/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.256884 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-g6wfz_8ea0edfd-ace0-474e-b868-7ad5bed77cab/ssh-known-hosts-edpm-deployment/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.367229 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-c67b7f46c-vdfh2_dcc230e6-cf6d-4fc2-bea2-9ba2b028716b/proxy-httpd/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.380930 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-c67b7f46c-vdfh2_dcc230e6-cf6d-4fc2-bea2-9ba2b028716b/proxy-server/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.390099 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-j5v45_60abe159-7e5d-4586-9d1b-0050de42edbe/swift-ring-rebalance/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.421856 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/account-server/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.450481 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/account-replicator/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.463589 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/account-auditor/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.476191 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/account-reaper/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.485679 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/container-server/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.514879 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/container-replicator/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.523681 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/container-auditor/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.531517 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/container-updater/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.542063 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/object-server/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.566287 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/object-replicator/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: 
I0121 14:14:24.583807 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/object-auditor/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.592337 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/object-updater/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.610687 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/object-expirer/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.621902 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/rsync/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.639467 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_89b81f15-19f3-4dab-9b2d-fa41b2eab844/swift-recon-cron/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.749254 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-v7wmb_72b52054-c641-4cfb-9e83-f5b6794f77de/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.798013 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_65a8700b-dcb3-42d5-9655-61f2c977e9e2/tempest-tests-tempest-tests-runner/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.806894 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_30dbae35-d4af-4e14-831b-3c17f0e66a0c/test-operator-logs-container/0.log" Jan 21 14:14:24 crc kubenswrapper[4765]: I0121 14:14:24.824699 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-hqrrn_f966a827-0001-4f9f-9600-072b24c50c9e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 14:14:34 crc kubenswrapper[4765]: I0121 14:14:34.881328 4765 generic.go:334] "Generic (PLEG): container finished" podID="e87ad7ef-ba98-4813-930c-2278bfe0d953" containerID="335d19655d7dc2cfca3aed5c9e2315edf424762a90d23d834f5925fc963dc23e" exitCode=0 Jan 21 14:14:34 crc kubenswrapper[4765]: I0121 14:14:34.881353 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvz4/crc-debug-5vnsj" event={"ID":"e87ad7ef-ba98-4813-930c-2278bfe0d953","Type":"ContainerDied","Data":"335d19655d7dc2cfca3aed5c9e2315edf424762a90d23d834f5925fc963dc23e"} Jan 21 14:14:35 crc kubenswrapper[4765]: I0121 14:14:35.992373 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rbvz4/crc-debug-5vnsj" Jan 21 14:14:36 crc kubenswrapper[4765]: I0121 14:14:36.031931 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rbvz4/crc-debug-5vnsj"] Jan 21 14:14:36 crc kubenswrapper[4765]: I0121 14:14:36.051481 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rbvz4/crc-debug-5vnsj"] Jan 21 14:14:36 crc kubenswrapper[4765]: I0121 14:14:36.105586 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e87ad7ef-ba98-4813-930c-2278bfe0d953-host\") pod \"e87ad7ef-ba98-4813-930c-2278bfe0d953\" (UID: \"e87ad7ef-ba98-4813-930c-2278bfe0d953\") " Jan 21 14:14:36 crc kubenswrapper[4765]: I0121 14:14:36.105700 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e87ad7ef-ba98-4813-930c-2278bfe0d953-host" (OuterVolumeSpecName: "host") pod "e87ad7ef-ba98-4813-930c-2278bfe0d953" (UID: "e87ad7ef-ba98-4813-930c-2278bfe0d953"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:14:36 crc kubenswrapper[4765]: I0121 14:14:36.105781 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfbj8\" (UniqueName: \"kubernetes.io/projected/e87ad7ef-ba98-4813-930c-2278bfe0d953-kube-api-access-bfbj8\") pod \"e87ad7ef-ba98-4813-930c-2278bfe0d953\" (UID: \"e87ad7ef-ba98-4813-930c-2278bfe0d953\") " Jan 21 14:14:36 crc kubenswrapper[4765]: I0121 14:14:36.106332 4765 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e87ad7ef-ba98-4813-930c-2278bfe0d953-host\") on node \"crc\" DevicePath \"\"" Jan 21 14:14:36 crc kubenswrapper[4765]: I0121 14:14:36.112771 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e87ad7ef-ba98-4813-930c-2278bfe0d953-kube-api-access-bfbj8" (OuterVolumeSpecName: "kube-api-access-bfbj8") pod "e87ad7ef-ba98-4813-930c-2278bfe0d953" (UID: "e87ad7ef-ba98-4813-930c-2278bfe0d953"). InnerVolumeSpecName "kube-api-access-bfbj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:14:36 crc kubenswrapper[4765]: I0121 14:14:36.208126 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfbj8\" (UniqueName: \"kubernetes.io/projected/e87ad7ef-ba98-4813-930c-2278bfe0d953-kube-api-access-bfbj8\") on node \"crc\" DevicePath \"\"" Jan 21 14:14:36 crc kubenswrapper[4765]: I0121 14:14:36.931482 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e3cba00f6a4ecad375041db63f0e7deb17b880b942f5bf12fd96848a3d14df2" Jan 21 14:14:36 crc kubenswrapper[4765]: I0121 14:14:36.931621 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rbvz4/crc-debug-5vnsj" Jan 21 14:14:37 crc kubenswrapper[4765]: I0121 14:14:37.193201 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rbvz4/crc-debug-vvbzj"] Jan 21 14:14:37 crc kubenswrapper[4765]: E0121 14:14:37.193655 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e87ad7ef-ba98-4813-930c-2278bfe0d953" containerName="container-00" Jan 21 14:14:37 crc kubenswrapper[4765]: I0121 14:14:37.193672 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="e87ad7ef-ba98-4813-930c-2278bfe0d953" containerName="container-00" Jan 21 14:14:37 crc kubenswrapper[4765]: I0121 14:14:37.193904 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="e87ad7ef-ba98-4813-930c-2278bfe0d953" containerName="container-00" Jan 21 14:14:37 crc kubenswrapper[4765]: I0121 14:14:37.194681 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rbvz4/crc-debug-vvbzj" Jan 21 14:14:37 crc kubenswrapper[4765]: I0121 14:14:37.333163 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dfd12c46-4421-45e8-85c2-c50c0e613d96-host\") pod \"crc-debug-vvbzj\" (UID: \"dfd12c46-4421-45e8-85c2-c50c0e613d96\") " pod="openshift-must-gather-rbvz4/crc-debug-vvbzj" Jan 21 14:14:37 crc kubenswrapper[4765]: I0121 14:14:37.333691 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7lxq\" (UniqueName: \"kubernetes.io/projected/dfd12c46-4421-45e8-85c2-c50c0e613d96-kube-api-access-w7lxq\") pod \"crc-debug-vvbzj\" (UID: \"dfd12c46-4421-45e8-85c2-c50c0e613d96\") " pod="openshift-must-gather-rbvz4/crc-debug-vvbzj" Jan 21 14:14:37 crc kubenswrapper[4765]: I0121 14:14:37.436053 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dfd12c46-4421-45e8-85c2-c50c0e613d96-host\") pod \"crc-debug-vvbzj\" (UID: \"dfd12c46-4421-45e8-85c2-c50c0e613d96\") " pod="openshift-must-gather-rbvz4/crc-debug-vvbzj" Jan 21 14:14:37 crc kubenswrapper[4765]: I0121 14:14:37.436157 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7lxq\" (UniqueName: \"kubernetes.io/projected/dfd12c46-4421-45e8-85c2-c50c0e613d96-kube-api-access-w7lxq\") pod \"crc-debug-vvbzj\" (UID: \"dfd12c46-4421-45e8-85c2-c50c0e613d96\") " pod="openshift-must-gather-rbvz4/crc-debug-vvbzj" Jan 21 14:14:37 crc kubenswrapper[4765]: I0121 14:14:37.436177 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dfd12c46-4421-45e8-85c2-c50c0e613d96-host\") pod \"crc-debug-vvbzj\" (UID: \"dfd12c46-4421-45e8-85c2-c50c0e613d96\") " pod="openshift-must-gather-rbvz4/crc-debug-vvbzj" Jan 21 14:14:37 crc kubenswrapper[4765]: I0121 14:14:37.463278 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7lxq\" (UniqueName: \"kubernetes.io/projected/dfd12c46-4421-45e8-85c2-c50c0e613d96-kube-api-access-w7lxq\") pod \"crc-debug-vvbzj\" (UID: \"dfd12c46-4421-45e8-85c2-c50c0e613d96\") " pod="openshift-must-gather-rbvz4/crc-debug-vvbzj" Jan 21 14:14:37 crc kubenswrapper[4765]: I0121 14:14:37.516283 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rbvz4/crc-debug-vvbzj" Jan 21 14:14:37 crc kubenswrapper[4765]: I0121 14:14:37.631701 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e87ad7ef-ba98-4813-930c-2278bfe0d953" path="/var/lib/kubelet/pods/e87ad7ef-ba98-4813-930c-2278bfe0d953/volumes" Jan 21 14:14:37 crc kubenswrapper[4765]: I0121 14:14:37.941635 4765 generic.go:334] "Generic (PLEG): container finished" podID="dfd12c46-4421-45e8-85c2-c50c0e613d96" containerID="43473c38d63aaf6d8df5000837f5b2294e83ab3fd2b8124280c1eb7f568982fe" exitCode=0 Jan 21 14:14:37 crc kubenswrapper[4765]: I0121 14:14:37.941726 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvz4/crc-debug-vvbzj" event={"ID":"dfd12c46-4421-45e8-85c2-c50c0e613d96","Type":"ContainerDied","Data":"43473c38d63aaf6d8df5000837f5b2294e83ab3fd2b8124280c1eb7f568982fe"} Jan 21 14:14:37 crc kubenswrapper[4765]: I0121 14:14:37.941979 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvz4/crc-debug-vvbzj" event={"ID":"dfd12c46-4421-45e8-85c2-c50c0e613d96","Type":"ContainerStarted","Data":"c60a6c815aff476c97ca15ab669eb23d74554e89bb0dd7defec095941b2b46d1"} Jan 21 14:14:38 crc kubenswrapper[4765]: I0121 14:14:38.529705 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rbvz4/crc-debug-vvbzj"] Jan 21 14:14:38 crc kubenswrapper[4765]: I0121 14:14:38.537003 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rbvz4/crc-debug-vvbzj"] Jan 21 14:14:39 crc kubenswrapper[4765]: I0121 14:14:39.073463 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rbvz4/crc-debug-vvbzj" Jan 21 14:14:39 crc kubenswrapper[4765]: I0121 14:14:39.274666 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dfd12c46-4421-45e8-85c2-c50c0e613d96-host\") pod \"dfd12c46-4421-45e8-85c2-c50c0e613d96\" (UID: \"dfd12c46-4421-45e8-85c2-c50c0e613d96\") " Jan 21 14:14:39 crc kubenswrapper[4765]: I0121 14:14:39.274784 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfd12c46-4421-45e8-85c2-c50c0e613d96-host" (OuterVolumeSpecName: "host") pod "dfd12c46-4421-45e8-85c2-c50c0e613d96" (UID: "dfd12c46-4421-45e8-85c2-c50c0e613d96"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:14:39 crc kubenswrapper[4765]: I0121 14:14:39.274890 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7lxq\" (UniqueName: \"kubernetes.io/projected/dfd12c46-4421-45e8-85c2-c50c0e613d96-kube-api-access-w7lxq\") pod \"dfd12c46-4421-45e8-85c2-c50c0e613d96\" (UID: \"dfd12c46-4421-45e8-85c2-c50c0e613d96\") " Jan 21 14:14:39 crc kubenswrapper[4765]: I0121 14:14:39.276773 4765 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dfd12c46-4421-45e8-85c2-c50c0e613d96-host\") on node \"crc\" DevicePath \"\"" Jan 21 14:14:39 crc kubenswrapper[4765]: I0121 14:14:39.290180 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfd12c46-4421-45e8-85c2-c50c0e613d96-kube-api-access-w7lxq" (OuterVolumeSpecName: "kube-api-access-w7lxq") pod "dfd12c46-4421-45e8-85c2-c50c0e613d96" (UID: "dfd12c46-4421-45e8-85c2-c50c0e613d96"). InnerVolumeSpecName "kube-api-access-w7lxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:14:39 crc kubenswrapper[4765]: I0121 14:14:39.378589 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7lxq\" (UniqueName: \"kubernetes.io/projected/dfd12c46-4421-45e8-85c2-c50c0e613d96-kube-api-access-w7lxq\") on node \"crc\" DevicePath \"\"" Jan 21 14:14:39 crc kubenswrapper[4765]: I0121 14:14:39.623560 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfd12c46-4421-45e8-85c2-c50c0e613d96" path="/var/lib/kubelet/pods/dfd12c46-4421-45e8-85c2-c50c0e613d96/volumes" Jan 21 14:14:39 crc kubenswrapper[4765]: I0121 14:14:39.964289 4765 scope.go:117] "RemoveContainer" containerID="43473c38d63aaf6d8df5000837f5b2294e83ab3fd2b8124280c1eb7f568982fe" Jan 21 14:14:39 crc kubenswrapper[4765]: I0121 14:14:39.964784 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rbvz4/crc-debug-vvbzj" Jan 21 14:14:40 crc kubenswrapper[4765]: I0121 14:14:40.126996 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rbvz4/crc-debug-ljgns"] Jan 21 14:14:40 crc kubenswrapper[4765]: E0121 14:14:40.128197 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfd12c46-4421-45e8-85c2-c50c0e613d96" containerName="container-00" Jan 21 14:14:40 crc kubenswrapper[4765]: I0121 14:14:40.128285 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd12c46-4421-45e8-85c2-c50c0e613d96" containerName="container-00" Jan 21 14:14:40 crc kubenswrapper[4765]: I0121 14:14:40.128543 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfd12c46-4421-45e8-85c2-c50c0e613d96" containerName="container-00" Jan 21 14:14:40 crc kubenswrapper[4765]: I0121 14:14:40.129652 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rbvz4/crc-debug-ljgns" Jan 21 14:14:40 crc kubenswrapper[4765]: I0121 14:14:40.194829 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5-host\") pod \"crc-debug-ljgns\" (UID: \"e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5\") " pod="openshift-must-gather-rbvz4/crc-debug-ljgns" Jan 21 14:14:40 crc kubenswrapper[4765]: I0121 14:14:40.194912 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2d8g\" (UniqueName: \"kubernetes.io/projected/e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5-kube-api-access-z2d8g\") pod \"crc-debug-ljgns\" (UID: \"e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5\") " pod="openshift-must-gather-rbvz4/crc-debug-ljgns" Jan 21 14:14:40 crc kubenswrapper[4765]: I0121 14:14:40.297450 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5-host\") pod \"crc-debug-ljgns\" (UID: \"e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5\") " pod="openshift-must-gather-rbvz4/crc-debug-ljgns" Jan 21 14:14:40 crc kubenswrapper[4765]: I0121 14:14:40.297560 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2d8g\" (UniqueName: \"kubernetes.io/projected/e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5-kube-api-access-z2d8g\") pod \"crc-debug-ljgns\" (UID: \"e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5\") " pod="openshift-must-gather-rbvz4/crc-debug-ljgns" Jan 21 14:14:40 crc kubenswrapper[4765]: I0121 14:14:40.297617 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5-host\") pod \"crc-debug-ljgns\" (UID: \"e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5\") " pod="openshift-must-gather-rbvz4/crc-debug-ljgns" Jan 21 14:14:40 crc kubenswrapper[4765]: I0121 14:14:40.314932 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2d8g\" (UniqueName: \"kubernetes.io/projected/e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5-kube-api-access-z2d8g\") pod \"crc-debug-ljgns\" (UID: \"e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5\") " pod="openshift-must-gather-rbvz4/crc-debug-ljgns" Jan 21 14:14:40 crc kubenswrapper[4765]: I0121 14:14:40.450409 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rbvz4/crc-debug-ljgns" Jan 21 14:14:40 crc kubenswrapper[4765]: W0121 14:14:40.887436 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2e4c078_7f3e_4e1f_a4d1_077aa5165bb5.slice/crio-19dc36f2797da9b49ae382535cb91af50ef5b28691be2d3a0011030ef5358c25 WatchSource:0}: Error finding container 19dc36f2797da9b49ae382535cb91af50ef5b28691be2d3a0011030ef5358c25: Status 404 returned error can't find the container with id 19dc36f2797da9b49ae382535cb91af50ef5b28691be2d3a0011030ef5358c25 Jan 21 14:14:40 crc kubenswrapper[4765]: I0121 14:14:40.973429 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvz4/crc-debug-ljgns" event={"ID":"e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5","Type":"ContainerStarted","Data":"19dc36f2797da9b49ae382535cb91af50ef5b28691be2d3a0011030ef5358c25"} Jan 21 14:14:41 crc kubenswrapper[4765]: I0121 14:14:41.204413 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m_20b31ee6-0264-4ffb-b43c-abbea443e89e/extract/0.log" Jan 21 14:14:41 crc kubenswrapper[4765]: I0121 14:14:41.214426 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m_20b31ee6-0264-4ffb-b43c-abbea443e89e/util/0.log" Jan 21 14:14:41 crc kubenswrapper[4765]: I0121 14:14:41.223092 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m_20b31ee6-0264-4ffb-b43c-abbea443e89e/pull/0.log" Jan 21 14:14:41 crc kubenswrapper[4765]: I0121 14:14:41.296039 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-848df65fbb-79lv9_448c57b9-0176-42e1-a493-609bc853db01/manager/0.log" Jan 21 14:14:41 crc kubenswrapper[4765]: I0121 14:14:41.335231 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-kq85p_cd5b6743-7a2a-4d03-8adc-952fb87e6f02/manager/0.log" Jan 21 14:14:41 crc kubenswrapper[4765]: I0121 14:14:41.360547 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-dgbtx_079ac5a2-3654-48e8-8bf0-597018fc2ca5/manager/0.log" Jan 21 14:14:41 crc kubenswrapper[4765]: I0121 14:14:41.488272 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-65hfk_4c92e105-ba8b-4828-bc30-857c5431672f/manager/0.log" Jan 21 14:14:41 crc kubenswrapper[4765]: I0121 14:14:41.553749 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-8pvpr_ab7eaa76-7a22-4d3c-85a3-9b643832d707/manager/0.log" Jan 21 14:14:41 crc kubenswrapper[4765]: I0121 14:14:41.592176 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-t42c2_00c36135-159f-43be-be7c-b4f01cf2ace7/manager/0.log" Jan 21 14:14:41 crc kubenswrapper[4765]: I0121 14:14:41.852669 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-c74jr_2962f7bb-1d22-4715-b609-2eb6da1de834/manager/0.log" Jan 21 14:14:41 crc kubenswrapper[4765]: I0121 14:14:41.868933 4765 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rk4x7_2a3c28ee-e170-4592-8291-db76c15675d1/manager/0.log" Jan 21 14:14:41 crc kubenswrapper[4765]: I0121 14:14:41.933362 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-hv2dn_30a8ff01-0173-45a7-9460-9df64146234d/manager/0.log" Jan 21 14:14:41 crc kubenswrapper[4765]: I0121 14:14:41.944374 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-rxxvb_c78d0245-2ac0-4576-860f-20c8ad7f7fa3/manager/0.log" Jan 21 14:14:41 crc kubenswrapper[4765]: I0121 14:14:41.979588 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-8kq4g_ecd5f054-6284-485a-8c41-6b2338a5c0f4/manager/0.log" Jan 21 14:14:41 crc kubenswrapper[4765]: I0121 14:14:41.988688 4765 generic.go:334] "Generic (PLEG): container finished" podID="e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5" containerID="fa19f1a82ca5113de7f726ff4b63bf523dcc732167cd1eb7852d6039bc0842fe" exitCode=0 Jan 21 14:14:41 crc kubenswrapper[4765]: I0121 14:14:41.988741 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvz4/crc-debug-ljgns" event={"ID":"e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5","Type":"ContainerDied","Data":"fa19f1a82ca5113de7f726ff4b63bf523dcc732167cd1eb7852d6039bc0842fe"} Jan 21 14:14:42 crc kubenswrapper[4765]: I0121 14:14:42.024206 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-r429h_bdcf568f-99c9-4432-b763-ce16903da409/manager/0.log" Jan 21 14:14:42 crc kubenswrapper[4765]: I0121 14:14:42.033231 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rbvz4/crc-debug-ljgns"] Jan 21 14:14:42 crc kubenswrapper[4765]: I0121 14:14:42.041453 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rbvz4/crc-debug-ljgns"] Jan 21 14:14:42 crc kubenswrapper[4765]: I0121 14:14:42.090066 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-m48zr_953ef395-07f2-4b90-8232-77b94a176094/manager/0.log" Jan 21 14:14:42 crc kubenswrapper[4765]: I0121 14:14:42.103411 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-kh677_882965e2-7eb0-4971-9770-e750a8fe36dc/manager/0.log" Jan 21 14:14:42 crc kubenswrapper[4765]: I0121 14:14:42.118723 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7_246657ac-def3-41ce-bd99-a8d00d97c86b/manager/0.log" Jan 21 14:14:42 crc kubenswrapper[4765]: I0121 14:14:42.286928 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-ccbfb74b7-bm4rb_5db9c466-59ec-47fb-8643-560935c3c92c/operator/0.log" Jan 21 14:14:43 crc kubenswrapper[4765]: I0121 14:14:43.114052 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rbvz4/crc-debug-ljgns" Jan 21 14:14:43 crc kubenswrapper[4765]: I0121 14:14:43.250377 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5-host\") pod \"e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5\" (UID: \"e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5\") " Jan 21 14:14:43 crc kubenswrapper[4765]: I0121 14:14:43.250487 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2d8g\" (UniqueName: \"kubernetes.io/projected/e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5-kube-api-access-z2d8g\") pod \"e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5\" (UID: \"e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5\") " Jan 21 14:14:43 crc kubenswrapper[4765]: I0121 14:14:43.250744 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5-host" (OuterVolumeSpecName: "host") pod "e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5" (UID: "e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 14:14:43 crc kubenswrapper[4765]: I0121 14:14:43.253125 4765 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5-host\") on node \"crc\" DevicePath \"\"" Jan 21 14:14:43 crc kubenswrapper[4765]: I0121 14:14:43.298124 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5-kube-api-access-z2d8g" (OuterVolumeSpecName: "kube-api-access-z2d8g") pod "e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5" (UID: "e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5"). InnerVolumeSpecName "kube-api-access-z2d8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:14:43 crc kubenswrapper[4765]: I0121 14:14:43.354677 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2d8g\" (UniqueName: \"kubernetes.io/projected/e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5-kube-api-access-z2d8g\") on node \"crc\" DevicePath \"\"" Jan 21 14:14:43 crc kubenswrapper[4765]: I0121 14:14:43.460261 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75fcf77584-5dfd7_af5f1c65-c317-4058-9d98-066b866bf83a/manager/0.log" Jan 21 14:14:43 crc kubenswrapper[4765]: I0121 14:14:43.467174 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-p9ml4_d35e26b9-ec61-4be2-b6f6-f40544f4094f/registry-server/0.log" Jan 21 14:14:43 crc kubenswrapper[4765]: I0121 14:14:43.549437 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-kvhff_17d3ffc3-5383-4beb-91d4-db120ddb1c74/manager/0.log" Jan 21 14:14:43 crc kubenswrapper[4765]: I0121 14:14:43.583379 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-97x9c_2bc79302-e5a0-4288-8b2e-ee371eb775a1/manager/0.log" Jan 21 14:14:43 crc kubenswrapper[4765]: I0121 14:14:43.603963 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-ql7j4_cead9f09-f8d9-4cf3-960e-eb1ba8f1fa99/operator/0.log" Jan 21 14:14:43 crc kubenswrapper[4765]: I0121 14:14:43.629772 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-gh9vl_c7a6160a-aef5-41af-b1cc-cc2cd97125d7/manager/0.log" Jan 21 14:14:43 crc kubenswrapper[4765]: I0121 14:14:43.629850 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5" path="/var/lib/kubelet/pods/e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5/volumes" Jan 21 14:14:43 crc kubenswrapper[4765]: I0121 14:14:43.684963 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-dhcgg_4c4840ab-a9b6-4243-a2f8-e21eaa84f165/manager/0.log" Jan 21 14:14:43 crc kubenswrapper[4765]: I0121 14:14:43.694028 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-s6zq8_be3fcc93-c1a3-4191-8f75-4d8aa5767593/manager/0.log" Jan 21 14:14:43 crc kubenswrapper[4765]: I0121 14:14:43.706719 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-8r9cq_2d19b122-8cf4-4b4a-8d31-037af2fd65fb/manager/0.log" Jan 21 14:14:44 crc kubenswrapper[4765]: I0121 14:14:44.010302 4765 scope.go:117] "RemoveContainer" containerID="fa19f1a82ca5113de7f726ff4b63bf523dcc732167cd1eb7852d6039bc0842fe" Jan 21 14:14:44 crc kubenswrapper[4765]: I0121 14:14:44.010333 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rbvz4/crc-debug-ljgns" Jan 21 14:14:44 crc kubenswrapper[4765]: I0121 14:14:44.446584 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:14:44 crc kubenswrapper[4765]: I0121 14:14:44.447597 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:14:48 crc kubenswrapper[4765]: I0121 14:14:48.802629 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-x4zpp_50ea39eb-559e-4298-9133-4d2a5c7890cb/control-plane-machine-set-operator/0.log" Jan 21 14:14:48 crc kubenswrapper[4765]: I0121 14:14:48.823392 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mnwzz_c35257f3-6d8a-4917-a956-3b71a0e54c23/kube-rbac-proxy/0.log" Jan 21 14:14:48 crc kubenswrapper[4765]: I0121 14:14:48.834869 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mnwzz_c35257f3-6d8a-4917-a956-3b71a0e54c23/machine-api-operator/0.log" Jan 21 14:14:54 crc kubenswrapper[4765]: I0121 14:14:54.880179 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-cssjm_30c79cf6-f62c-498b-8c0b-184d3eec661f/cert-manager-controller/0.log" Jan 21 14:14:54 crc kubenswrapper[4765]: I0121 14:14:54.897867 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-7gnzb_861d65e3-bec0-4a97-9ef1-2ff8d0c660fe/cert-manager-cainjector/0.log" Jan 21 14:14:54 crc kubenswrapper[4765]: I0121 14:14:54.908939 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-gznfw_34bef5eb-722e-4dd8-b19a-ae2ec67a4c93/cert-manager-webhook/0.log" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.182503 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483415-lkvqx"] Jan 21 14:15:00 crc kubenswrapper[4765]: E0121 14:15:00.183445 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5" containerName="container-00" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.183460 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5" containerName="container-00" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.183658 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e4c078-7f3e-4e1f-a4d1-077aa5165bb5" containerName="container-00" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.184275 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-lkvqx" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.186848 4765 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.189814 4765 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.195872 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483415-lkvqx"] Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.381015 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkmlg\" (UniqueName: \"kubernetes.io/projected/f10a1c40-bd4f-4051-856f-c1a45f08b48e-kube-api-access-rkmlg\") pod \"collect-profiles-29483415-lkvqx\" (UID: \"f10a1c40-bd4f-4051-856f-c1a45f08b48e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-lkvqx" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.381950 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f10a1c40-bd4f-4051-856f-c1a45f08b48e-secret-volume\") pod \"collect-profiles-29483415-lkvqx\" (UID: \"f10a1c40-bd4f-4051-856f-c1a45f08b48e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-lkvqx" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.381997 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f10a1c40-bd4f-4051-856f-c1a45f08b48e-config-volume\") pod \"collect-profiles-29483415-lkvqx\" (UID: \"f10a1c40-bd4f-4051-856f-c1a45f08b48e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-lkvqx" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.483231 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkmlg\" (UniqueName: \"kubernetes.io/projected/f10a1c40-bd4f-4051-856f-c1a45f08b48e-kube-api-access-rkmlg\") pod \"collect-profiles-29483415-lkvqx\" (UID: \"f10a1c40-bd4f-4051-856f-c1a45f08b48e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-lkvqx" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.483290 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f10a1c40-bd4f-4051-856f-c1a45f08b48e-secret-volume\") pod \"collect-profiles-29483415-lkvqx\" (UID: \"f10a1c40-bd4f-4051-856f-c1a45f08b48e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-lkvqx" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.483323 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f10a1c40-bd4f-4051-856f-c1a45f08b48e-config-volume\") pod \"collect-profiles-29483415-lkvqx\" (UID: \"f10a1c40-bd4f-4051-856f-c1a45f08b48e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-lkvqx" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.484337 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f10a1c40-bd4f-4051-856f-c1a45f08b48e-config-volume\") pod 
\"collect-profiles-29483415-lkvqx\" (UID: \"f10a1c40-bd4f-4051-856f-c1a45f08b48e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-lkvqx" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.544062 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-kgmtc_79ffb165-f80d-428c-a29e-998f1a119cd7/nmstate-console-plugin/0.log" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.561800 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lbjjz_0da8e178-dbab-4c9c-9e7a-503796386d6f/nmstate-handler/0.log" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.574009 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b2d62_7d962382-89ac-40cc-92b2-0bb0a8cecc4d/nmstate-metrics/0.log" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.594934 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b2d62_7d962382-89ac-40cc-92b2-0bb0a8cecc4d/kube-rbac-proxy/0.log" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.619498 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-fhpqb_26e746e8-47b5-4944-957d-5d43a89b207b/nmstate-operator/0.log" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.631123 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-lmj8n_a847c8c4-dd77-4cd8-9e06-5adb119c43fc/nmstate-webhook/0.log" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.885113 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f10a1c40-bd4f-4051-856f-c1a45f08b48e-secret-volume\") pod \"collect-profiles-29483415-lkvqx\" (UID: \"f10a1c40-bd4f-4051-856f-c1a45f08b48e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-lkvqx" Jan 21 14:15:00 crc kubenswrapper[4765]: I0121 14:15:00.885672 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkmlg\" (UniqueName: \"kubernetes.io/projected/f10a1c40-bd4f-4051-856f-c1a45f08b48e-kube-api-access-rkmlg\") pod \"collect-profiles-29483415-lkvqx\" (UID: \"f10a1c40-bd4f-4051-856f-c1a45f08b48e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-lkvqx" Jan 21 14:15:01 crc kubenswrapper[4765]: I0121 14:15:01.106896 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-lkvqx" Jan 21 14:15:01 crc kubenswrapper[4765]: I0121 14:15:01.610955 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483415-lkvqx"] Jan 21 14:15:02 crc kubenswrapper[4765]: I0121 14:15:02.163643 4765 generic.go:334] "Generic (PLEG): container finished" podID="f10a1c40-bd4f-4051-856f-c1a45f08b48e" containerID="caada303d9aa8be1271d978b03f39a22ef821dba360eb27a411d4846204c7f89" exitCode=0 Jan 21 14:15:02 crc kubenswrapper[4765]: I0121 14:15:02.163714 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-lkvqx" event={"ID":"f10a1c40-bd4f-4051-856f-c1a45f08b48e","Type":"ContainerDied","Data":"caada303d9aa8be1271d978b03f39a22ef821dba360eb27a411d4846204c7f89"} Jan 21 14:15:02 crc kubenswrapper[4765]: I0121 14:15:02.164026 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-lkvqx" event={"ID":"f10a1c40-bd4f-4051-856f-c1a45f08b48e","Type":"ContainerStarted","Data":"e2ebab06538e4319735aa36ae5d36eaccbbc586b49c8edab6b5ddad9965d08af"} Jan 21 14:15:03 crc kubenswrapper[4765]: I0121 14:15:03.569591 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-lkvqx" Jan 21 14:15:03 crc kubenswrapper[4765]: I0121 14:15:03.749107 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f10a1c40-bd4f-4051-856f-c1a45f08b48e-secret-volume\") pod \"f10a1c40-bd4f-4051-856f-c1a45f08b48e\" (UID: \"f10a1c40-bd4f-4051-856f-c1a45f08b48e\") " Jan 21 14:15:03 crc kubenswrapper[4765]: I0121 14:15:03.749374 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f10a1c40-bd4f-4051-856f-c1a45f08b48e-config-volume\") pod \"f10a1c40-bd4f-4051-856f-c1a45f08b48e\" (UID: \"f10a1c40-bd4f-4051-856f-c1a45f08b48e\") " Jan 21 14:15:03 crc kubenswrapper[4765]: I0121 14:15:03.749435 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkmlg\" (UniqueName: \"kubernetes.io/projected/f10a1c40-bd4f-4051-856f-c1a45f08b48e-kube-api-access-rkmlg\") pod \"f10a1c40-bd4f-4051-856f-c1a45f08b48e\" (UID: \"f10a1c40-bd4f-4051-856f-c1a45f08b48e\") " Jan 21 14:15:03 crc kubenswrapper[4765]: I0121 14:15:03.749997 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f10a1c40-bd4f-4051-856f-c1a45f08b48e-config-volume" (OuterVolumeSpecName: "config-volume") pod "f10a1c40-bd4f-4051-856f-c1a45f08b48e" (UID: "f10a1c40-bd4f-4051-856f-c1a45f08b48e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 14:15:03 crc kubenswrapper[4765]: I0121 14:15:03.750093 4765 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f10a1c40-bd4f-4051-856f-c1a45f08b48e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:15:03 crc kubenswrapper[4765]: I0121 14:15:03.764721 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10a1c40-bd4f-4051-856f-c1a45f08b48e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f10a1c40-bd4f-4051-856f-c1a45f08b48e" (UID: "f10a1c40-bd4f-4051-856f-c1a45f08b48e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 14:15:03 crc kubenswrapper[4765]: I0121 14:15:03.764792 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f10a1c40-bd4f-4051-856f-c1a45f08b48e-kube-api-access-rkmlg" (OuterVolumeSpecName: "kube-api-access-rkmlg") pod "f10a1c40-bd4f-4051-856f-c1a45f08b48e" (UID: "f10a1c40-bd4f-4051-856f-c1a45f08b48e"). InnerVolumeSpecName "kube-api-access-rkmlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:15:03 crc kubenswrapper[4765]: I0121 14:15:03.851564 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkmlg\" (UniqueName: \"kubernetes.io/projected/f10a1c40-bd4f-4051-856f-c1a45f08b48e-kube-api-access-rkmlg\") on node \"crc\" DevicePath \"\"" Jan 21 14:15:03 crc kubenswrapper[4765]: I0121 14:15:03.851603 4765 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f10a1c40-bd4f-4051-856f-c1a45f08b48e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 14:15:04 crc kubenswrapper[4765]: I0121 14:15:04.181239 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-lkvqx" event={"ID":"f10a1c40-bd4f-4051-856f-c1a45f08b48e","Type":"ContainerDied","Data":"e2ebab06538e4319735aa36ae5d36eaccbbc586b49c8edab6b5ddad9965d08af"} Jan 21 14:15:04 crc kubenswrapper[4765]: I0121 14:15:04.181276 4765 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2ebab06538e4319735aa36ae5d36eaccbbc586b49c8edab6b5ddad9965d08af" Jan 21 14:15:04 crc kubenswrapper[4765]: I0121 14:15:04.181290 4765 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 14:15:04 crc kubenswrapper[4765]: I0121 14:15:04.181290 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483415-lkvqx"
Jan 21 14:15:04 crc kubenswrapper[4765]: I0121 14:15:04.645494 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6"]
Jan 21 14:15:04 crc kubenswrapper[4765]: I0121 14:15:04.654901 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483370-frrv6"]
Jan 21 14:15:05 crc kubenswrapper[4765]: I0121 14:15:05.625895 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2768b478-fd48-4851-8fa3-2b728baccd76" path="/var/lib/kubelet/pods/2768b478-fd48-4851-8fa3-2b728baccd76/volumes"
Jan 21 14:15:12 crc kubenswrapper[4765]: I0121 14:15:12.890889 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skh9c_f05e7811-d30d-4f00-b816-a740a454c635/controller/0.log"
Jan 21 14:15:12 crc kubenswrapper[4765]: I0121 14:15:12.899039 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skh9c_f05e7811-d30d-4f00-b816-a740a454c635/kube-rbac-proxy/0.log"
Jan 21 14:15:12 crc kubenswrapper[4765]: I0121 14:15:12.908155 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-qlhwh_af902f5f-216b-41c7-b1e9-56953151dd65/frr-k8s-webhook-server/0.log"
Jan 21 14:15:12 crc kubenswrapper[4765]: I0121 14:15:12.933677 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/controller/0.log"
Jan 21 14:15:14 crc kubenswrapper[4765]: I0121 14:15:14.445531 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 14:15:14 crc kubenswrapper[4765]: I0121 14:15:14.445888 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 14:15:14 crc kubenswrapper[4765]: I0121 14:15:14.599463 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/frr/0.log"
Jan 21 14:15:14 crc kubenswrapper[4765]: I0121 14:15:14.609891 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/reloader/0.log"
Jan 21 14:15:14 crc kubenswrapper[4765]: I0121 14:15:14.618427 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/frr-metrics/0.log"
Jan 21 14:15:14 crc kubenswrapper[4765]: I0121 14:15:14.626811 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/kube-rbac-proxy/0.log"
Jan 21 14:15:14 crc kubenswrapper[4765]: I0121 14:15:14.635105 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/kube-rbac-proxy-frr/0.log"
Jan 21 14:15:14 crc kubenswrapper[4765]: I0121 14:15:14.645251 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/cp-frr-files/0.log"
Jan 21 14:15:14 crc kubenswrapper[4765]: I0121 14:15:14.657839 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/cp-reloader/0.log"
Jan 21 14:15:14 crc kubenswrapper[4765]: I0121 14:15:14.665291 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/cp-metrics/0.log"
Jan 21 14:15:14 crc kubenswrapper[4765]: I0121 14:15:14.695001 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c66566bf6-ls8r8_57ed60d8-a38f-47ba-b66d-6e7e557b4399/manager/0.log"
Jan 21 14:15:14 crc kubenswrapper[4765]: I0121 14:15:14.704761 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-77844fbdcc-cgv2c_7ba871a2-babc-4cc6-a13b-4fa78e3d0580/webhook-server/0.log"
Jan 21 14:15:15 crc kubenswrapper[4765]: I0121 14:15:15.032508 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vswxq_8f59aeb8-b8fe-44bc-9e55-94eba06a676b/speaker/0.log"
Jan 21 14:15:15 crc kubenswrapper[4765]: I0121 14:15:15.042433 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vswxq_8f59aeb8-b8fe-44bc-9e55-94eba06a676b/kube-rbac-proxy/0.log"
Jan 21 14:15:20 crc kubenswrapper[4765]: I0121 14:15:20.053479 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w_d73b65cf-eba0-49dd-81ad-0fb0431092b8/extract/0.log"
Jan 21 14:15:20 crc kubenswrapper[4765]: I0121 14:15:20.066813 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w_d73b65cf-eba0-49dd-81ad-0fb0431092b8/util/0.log"
Jan 21 14:15:20 crc kubenswrapper[4765]: I0121 14:15:20.079719 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjq25w_d73b65cf-eba0-49dd-81ad-0fb0431092b8/pull/0.log"
Jan 21 14:15:20 crc kubenswrapper[4765]: I0121 14:15:20.090572 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22_68e5ceb6-2341-4976-8588-ecdd97e94b29/extract/0.log"
Jan 21 14:15:20 crc kubenswrapper[4765]: I0121 14:15:20.099588 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22_68e5ceb6-2341-4976-8588-ecdd97e94b29/util/0.log"
Jan 21 14:15:20 crc kubenswrapper[4765]: I0121 14:15:20.107857 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71327g22_68e5ceb6-2341-4976-8588-ecdd97e94b29/pull/0.log"
Jan 21 14:15:20 crc kubenswrapper[4765]: I0121 14:15:20.585568 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l5vxr_1290053f-ebc1-4a58-963a-333751e51945/registry-server/0.log"
Jan 21 14:15:20 crc kubenswrapper[4765]: I0121 14:15:20.594009 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l5vxr_1290053f-ebc1-4a58-963a-333751e51945/extract-utilities/0.log"
Jan 21 14:15:20 crc kubenswrapper[4765]: I0121 14:15:20.603581 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-l5vxr_1290053f-ebc1-4a58-963a-333751e51945/extract-content/0.log"
Jan 21 14:15:21 crc kubenswrapper[4765]: I0121 14:15:21.207981 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fskff_f231dd53-72c3-4d70-879f-d840f959c6c6/registry-server/0.log"
Jan 21 14:15:21 crc kubenswrapper[4765]: I0121 14:15:21.222381 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fskff_f231dd53-72c3-4d70-879f-d840f959c6c6/extract-utilities/0.log"
Jan 21 14:15:21 crc kubenswrapper[4765]: I0121 14:15:21.232366 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fskff_f231dd53-72c3-4d70-879f-d840f959c6c6/extract-content/0.log"
Jan 21 14:15:21 crc kubenswrapper[4765]: I0121 14:15:21.247419 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7bhqm_ed56a4eb-55dc-43eb-86bc-1a7d73bdf3b6/marketplace-operator/0.log"
Jan 21 14:15:21 crc kubenswrapper[4765]: I0121 14:15:21.416618 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n2lvd_fd0d39b7-d9c4-4e89-a696-163f5f23eb76/registry-server/0.log"
Jan 21 14:15:21 crc kubenswrapper[4765]: I0121 14:15:21.423378 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n2lvd_fd0d39b7-d9c4-4e89-a696-163f5f23eb76/extract-utilities/0.log"
Jan 21 14:15:21 crc kubenswrapper[4765]: I0121 14:15:21.431627 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n2lvd_fd0d39b7-d9c4-4e89-a696-163f5f23eb76/extract-content/0.log"
Jan 21 14:15:21 crc kubenswrapper[4765]: I0121 14:15:21.916185 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-54n7h_807f8e51-3f5b-4702-be3f-7fe335b54522/registry-server/0.log"
Jan 21 14:15:21 crc kubenswrapper[4765]: I0121 14:15:21.921889 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-54n7h_807f8e51-3f5b-4702-be3f-7fe335b54522/extract-utilities/0.log"
Jan 21 14:15:21 crc kubenswrapper[4765]: I0121 14:15:21.929595 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-54n7h_807f8e51-3f5b-4702-be3f-7fe335b54522/extract-content/0.log"
Jan 21 14:15:39 crc kubenswrapper[4765]: I0121 14:15:39.041554 4765 scope.go:117] "RemoveContainer" containerID="9a070c7c7c1e86b813794e568569290b7bb1f77420a2cbae773c4e1923e0e894"
Jan 21 14:15:44 crc kubenswrapper[4765]: I0121 14:15:44.445824 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 14:15:44 crc kubenswrapper[4765]: I0121 14:15:44.446386 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
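[Editorial note: the liveness probe failing above is a plain HTTP GET against 127.0.0.1:8798/health that errors out with "connection refused" because nothing is listening while the daemon is down. A minimal Go sketch of an HTTP liveness-style check of this shape follows; it is an illustration, not kubelet's actual prober code.]

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeHTTP performs one liveness-style HTTP check: any transport error
// (e.g. "connect: connection refused") or non-2xx status counts as failure.
func probeHTTP(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// The endpoint probed in the log above; while the daemon is down the
	// port is closed, so this reports "connection refused".
	if err := probeHTTP("http://127.0.0.1:8798/health", time.Second); err != nil {
		fmt.Printf("Probe failed: probeType=Liveness output=%q\n", err)
	}
}
```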
Jan 21 14:15:44 crc kubenswrapper[4765]: I0121 14:15:44.446463 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v72nq"
Jan 21 14:15:44 crc kubenswrapper[4765]: I0121 14:15:44.447256 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435"} pod="openshift-machine-config-operator/machine-config-daemon-v72nq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 14:15:44 crc kubenswrapper[4765]: I0121 14:15:44.447312 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" containerID="cri-o://221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" gracePeriod=600
Jan 21 14:15:44 crc kubenswrapper[4765]: E0121 14:15:44.660842 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:15:44 crc kubenswrapper[4765]: I0121 14:15:44.963567 4765 generic.go:334] "Generic (PLEG): container finished" podID="e149390c-e4da-4dfd-bed2-b14de058f921" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" exitCode=0
Jan 21 14:15:44 crc kubenswrapper[4765]: I0121 14:15:44.963621 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerDied","Data":"221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435"}
Jan 21 14:15:44 crc kubenswrapper[4765]: I0121 14:15:44.963662 4765 scope.go:117] "RemoveContainer" containerID="2ca0f5d9400cc961af6025381ed82b34f4e78bfb4f3b3d8562479cb007ef5b63"
Jan 21 14:15:44 crc kubenswrapper[4765]: I0121 14:15:44.964438 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435"
Jan 21 14:15:44 crc kubenswrapper[4765]: E0121 14:15:44.964734 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:15:58 crc kubenswrapper[4765]: I0121 14:15:58.614271 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435"
Jan 21 14:15:58 crc kubenswrapper[4765]: E0121 14:15:58.615440 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
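[Editorial note: each failed restart extends the CrashLoopBackOff delay, and by this point the container has failed often enough that the delay is pinned at its cap, so every retry below logs "back-off 5m0s". A minimal Go sketch of a capped exponential backoff of this shape follows; the 10s base doubling to a 5m cap is the behavior commonly attributed to kubelet, but treat the exact constants as assumptions.]

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns the restart delay after n consecutive failures,
// doubling from base and saturating at maxDelay. Constants are assumptions
// chosen to match the "back-off 5m0s" messages above, not read from source.
func crashLoopDelay(n int, base, maxDelay time.Duration) time.Duration {
	d := base
	for i := 1; i < n; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("failure %d -> back-off %v\n", n, crashLoopDelay(n, 10*time.Second, 5*time.Minute))
	}
	// From failure 6 onward this prints 5m0s, matching the repeated
	// "back-off 5m0s restarting failed container" errors in the log.
}
```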
podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:16:12 crc kubenswrapper[4765]: I0121 14:16:12.613607 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:16:12 crc kubenswrapper[4765]: E0121 14:16:12.614329 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:16:25 crc kubenswrapper[4765]: I0121 14:16:25.613787 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:16:25 crc kubenswrapper[4765]: E0121 14:16:25.614692 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:16:38 crc kubenswrapper[4765]: I0121 14:16:38.614381 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:16:38 crc kubenswrapper[4765]: E0121 14:16:38.615140 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:16:51 crc kubenswrapper[4765]: I0121 14:16:51.293030 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skh9c_f05e7811-d30d-4f00-b816-a740a454c635/controller/0.log" Jan 21 14:16:51 crc kubenswrapper[4765]: I0121 14:16:51.301412 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skh9c_f05e7811-d30d-4f00-b816-a740a454c635/kube-rbac-proxy/0.log" Jan 21 14:16:51 crc kubenswrapper[4765]: I0121 14:16:51.319711 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-qlhwh_af902f5f-216b-41c7-b1e9-56953151dd65/frr-k8s-webhook-server/0.log" Jan 21 14:16:51 crc kubenswrapper[4765]: I0121 14:16:51.336504 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/controller/0.log" Jan 21 14:16:51 crc kubenswrapper[4765]: I0121 14:16:51.728933 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-cssjm_30c79cf6-f62c-498b-8c0b-184d3eec661f/cert-manager-controller/0.log" Jan 21 14:16:51 crc kubenswrapper[4765]: I0121 14:16:51.746638 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-7gnzb_861d65e3-bec0-4a97-9ef1-2ff8d0c660fe/cert-manager-cainjector/0.log" Jan 21 14:16:51 crc kubenswrapper[4765]: I0121 14:16:51.760553 4765 log.go:25] "Finished 
parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-gznfw_34bef5eb-722e-4dd8-b19a-ae2ec67a4c93/cert-manager-webhook/0.log" Jan 21 14:16:52 crc kubenswrapper[4765]: I0121 14:16:52.871403 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/frr/0.log" Jan 21 14:16:52 crc kubenswrapper[4765]: I0121 14:16:52.880343 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/reloader/0.log" Jan 21 14:16:52 crc kubenswrapper[4765]: I0121 14:16:52.892179 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/frr-metrics/0.log" Jan 21 14:16:52 crc kubenswrapper[4765]: I0121 14:16:52.902463 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/kube-rbac-proxy/0.log" Jan 21 14:16:52 crc kubenswrapper[4765]: I0121 14:16:52.913075 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/kube-rbac-proxy-frr/0.log" Jan 21 14:16:52 crc kubenswrapper[4765]: I0121 14:16:52.921894 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/cp-frr-files/0.log" Jan 21 14:16:52 crc kubenswrapper[4765]: I0121 14:16:52.932419 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/cp-reloader/0.log" Jan 21 14:16:52 crc kubenswrapper[4765]: I0121 14:16:52.941484 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zcjrs_9120122b-7a7d-4bb6-bf58-29b63c9e20bf/cp-metrics/0.log" Jan 21 14:16:52 crc kubenswrapper[4765]: I0121 14:16:52.965237 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c66566bf6-ls8r8_57ed60d8-a38f-47ba-b66d-6e7e557b4399/manager/0.log" Jan 21 14:16:52 crc kubenswrapper[4765]: I0121 14:16:52.975472 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-77844fbdcc-cgv2c_7ba871a2-babc-4cc6-a13b-4fa78e3d0580/webhook-server/0.log" Jan 21 14:16:53 crc kubenswrapper[4765]: I0121 14:16:53.331762 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vswxq_8f59aeb8-b8fe-44bc-9e55-94eba06a676b/speaker/0.log" Jan 21 14:16:53 crc kubenswrapper[4765]: I0121 14:16:53.340788 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vswxq_8f59aeb8-b8fe-44bc-9e55-94eba06a676b/kube-rbac-proxy/0.log" Jan 21 14:16:53 crc kubenswrapper[4765]: I0121 14:16:53.619690 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:16:53 crc kubenswrapper[4765]: E0121 14:16:53.620000 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:16:53 crc kubenswrapper[4765]: I0121 14:16:53.943371 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m_20b31ee6-0264-4ffb-b43c-abbea443e89e/extract/0.log" Jan 21 14:16:53 crc kubenswrapper[4765]: I0121 14:16:53.953200 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m_20b31ee6-0264-4ffb-b43c-abbea443e89e/util/0.log" Jan 21 14:16:53 crc kubenswrapper[4765]: I0121 14:16:53.965627 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m_20b31ee6-0264-4ffb-b43c-abbea443e89e/pull/0.log" Jan 21 14:16:54 crc kubenswrapper[4765]: I0121 14:16:54.044001 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-848df65fbb-79lv9_448c57b9-0176-42e1-a493-609bc853db01/manager/0.log" Jan 21 14:16:54 crc kubenswrapper[4765]: I0121 14:16:54.093417 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-kq85p_cd5b6743-7a2a-4d03-8adc-952fb87e6f02/manager/0.log" Jan 21 14:16:54 crc kubenswrapper[4765]: I0121 14:16:54.110092 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-dgbtx_079ac5a2-3654-48e8-8bf0-597018fc2ca5/manager/0.log" Jan 21 14:16:54 crc kubenswrapper[4765]: I0121 14:16:54.210440 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-65hfk_4c92e105-ba8b-4828-bc30-857c5431672f/manager/0.log" Jan 21 14:16:54 crc kubenswrapper[4765]: I0121 14:16:54.221418 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-8pvpr_ab7eaa76-7a22-4d3c-85a3-9b643832d707/manager/0.log" Jan 21 14:16:54 crc kubenswrapper[4765]: I0121 14:16:54.256820 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-t42c2_00c36135-159f-43be-be7c-b4f01cf2ace7/manager/0.log" Jan 21 14:16:54 crc kubenswrapper[4765]: I0121 14:16:54.537412 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-c74jr_2962f7bb-1d22-4715-b609-2eb6da1de834/manager/0.log" Jan 21 14:16:54 crc kubenswrapper[4765]: I0121 14:16:54.567457 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rk4x7_2a3c28ee-e170-4592-8291-db76c15675d1/manager/0.log" Jan 21 14:16:54 crc kubenswrapper[4765]: I0121 14:16:54.688162 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-hv2dn_30a8ff01-0173-45a7-9460-9df64146234d/manager/0.log" Jan 21 14:16:54 crc kubenswrapper[4765]: I0121 14:16:54.698901 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-rxxvb_c78d0245-2ac0-4576-860f-20c8ad7f7fa3/manager/0.log" Jan 21 14:16:54 crc kubenswrapper[4765]: I0121 14:16:54.737546 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-8kq4g_ecd5f054-6284-485a-8c41-6b2338a5c0f4/manager/0.log" Jan 21 14:16:54 crc kubenswrapper[4765]: I0121 14:16:54.785570 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-r429h_bdcf568f-99c9-4432-b763-ce16903da409/manager/0.log" Jan 21 14:16:54 crc kubenswrapper[4765]: I0121 14:16:54.858733 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-m48zr_953ef395-07f2-4b90-8232-77b94a176094/manager/0.log" Jan 21 14:16:54 crc kubenswrapper[4765]: I0121 14:16:54.881809 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-kh677_882965e2-7eb0-4971-9770-e750a8fe36dc/manager/0.log" Jan 21 14:16:54 crc kubenswrapper[4765]: I0121 14:16:54.913800 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7_246657ac-def3-41ce-bd99-a8d00d97c86b/manager/0.log" Jan 21 14:16:55 crc kubenswrapper[4765]: I0121 14:16:55.049751 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-ccbfb74b7-bm4rb_5db9c466-59ec-47fb-8643-560935c3c92c/operator/0.log" Jan 21 14:16:55 crc kubenswrapper[4765]: I0121 14:16:55.134222 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-cssjm_30c79cf6-f62c-498b-8c0b-184d3eec661f/cert-manager-controller/0.log" Jan 21 14:16:55 crc kubenswrapper[4765]: I0121 14:16:55.163643 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-7gnzb_861d65e3-bec0-4a97-9ef1-2ff8d0c660fe/cert-manager-cainjector/0.log" Jan 21 14:16:55 crc kubenswrapper[4765]: I0121 14:16:55.190411 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-gznfw_34bef5eb-722e-4dd8-b19a-ae2ec67a4c93/cert-manager-webhook/0.log" Jan 21 14:16:56 crc kubenswrapper[4765]: I0121 14:16:56.238796 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75fcf77584-5dfd7_af5f1c65-c317-4058-9d98-066b866bf83a/manager/0.log" Jan 21 14:16:56 crc kubenswrapper[4765]: I0121 14:16:56.273032 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-p9ml4_d35e26b9-ec61-4be2-b6f6-f40544f4094f/registry-server/0.log" Jan 21 14:16:56 crc kubenswrapper[4765]: I0121 14:16:56.333825 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-kvhff_17d3ffc3-5383-4beb-91d4-db120ddb1c74/manager/0.log" Jan 21 14:16:56 crc kubenswrapper[4765]: I0121 14:16:56.365480 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-97x9c_2bc79302-e5a0-4288-8b2e-ee371eb775a1/manager/0.log" Jan 21 14:16:56 crc kubenswrapper[4765]: I0121 14:16:56.388415 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-ql7j4_cead9f09-f8d9-4cf3-960e-eb1ba8f1fa99/operator/0.log" Jan 21 14:16:56 crc kubenswrapper[4765]: I0121 14:16:56.420892 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-gh9vl_c7a6160a-aef5-41af-b1cc-cc2cd97125d7/manager/0.log" Jan 21 14:16:56 crc kubenswrapper[4765]: I0121 14:16:56.494871 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-dhcgg_4c4840ab-a9b6-4243-a2f8-e21eaa84f165/manager/0.log" Jan 21 14:16:56 crc kubenswrapper[4765]: I0121 14:16:56.507984 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-s6zq8_be3fcc93-c1a3-4191-8f75-4d8aa5767593/manager/0.log" Jan 21 14:16:56 crc kubenswrapper[4765]: I0121 14:16:56.516883 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-x4zpp_50ea39eb-559e-4298-9133-4d2a5c7890cb/control-plane-machine-set-operator/0.log" Jan 21 14:16:56 crc kubenswrapper[4765]: I0121 14:16:56.528328 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-8r9cq_2d19b122-8cf4-4b4a-8d31-037af2fd65fb/manager/0.log" Jan 21 14:16:56 crc kubenswrapper[4765]: I0121 14:16:56.549823 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mnwzz_c35257f3-6d8a-4917-a956-3b71a0e54c23/kube-rbac-proxy/0.log" Jan 21 14:16:56 crc kubenswrapper[4765]: I0121 14:16:56.564876 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mnwzz_c35257f3-6d8a-4917-a956-3b71a0e54c23/machine-api-operator/0.log" Jan 21 14:16:57 crc kubenswrapper[4765]: I0121 14:16:57.879640 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m_20b31ee6-0264-4ffb-b43c-abbea443e89e/extract/0.log" Jan 21 14:16:57 crc kubenswrapper[4765]: I0121 14:16:57.889874 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m_20b31ee6-0264-4ffb-b43c-abbea443e89e/util/0.log" Jan 21 14:16:57 crc kubenswrapper[4765]: I0121 14:16:57.902737 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6adfe8aecf27c92f7303ec90f22a5dd84e8f7d9ae907177de7744235c17ml8m_20b31ee6-0264-4ffb-b43c-abbea443e89e/pull/0.log" Jan 21 14:16:57 crc kubenswrapper[4765]: I0121 14:16:57.988630 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-848df65fbb-79lv9_448c57b9-0176-42e1-a493-609bc853db01/manager/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.027892 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-kq85p_cd5b6743-7a2a-4d03-8adc-952fb87e6f02/manager/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.039547 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-dgbtx_079ac5a2-3654-48e8-8bf0-597018fc2ca5/manager/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.130071 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-65hfk_4c92e105-ba8b-4828-bc30-857c5431672f/manager/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.141756 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-8pvpr_ab7eaa76-7a22-4d3c-85a3-9b643832d707/manager/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.170994 4765 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-t42c2_00c36135-159f-43be-be7c-b4f01cf2ace7/manager/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.428601 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-c74jr_2962f7bb-1d22-4715-b609-2eb6da1de834/manager/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.457065 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-rk4x7_2a3c28ee-e170-4592-8291-db76c15675d1/manager/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.526241 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-hv2dn_30a8ff01-0173-45a7-9460-9df64146234d/manager/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.540919 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-rxxvb_c78d0245-2ac0-4576-860f-20c8ad7f7fa3/manager/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.589820 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-8kq4g_ecd5f054-6284-485a-8c41-6b2338a5c0f4/manager/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.655303 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-r429h_bdcf568f-99c9-4432-b763-ce16903da409/manager/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.682842 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-kgmtc_79ffb165-f80d-428c-a29e-998f1a119cd7/nmstate-console-plugin/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.708115 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lbjjz_0da8e178-dbab-4c9c-9e7a-503796386d6f/nmstate-handler/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.721967 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b2d62_7d962382-89ac-40cc-92b2-0bb0a8cecc4d/nmstate-metrics/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.730238 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b2d62_7d962382-89ac-40cc-92b2-0bb0a8cecc4d/kube-rbac-proxy/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.754755 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-fhpqb_26e746e8-47b5-4944-957d-5d43a89b207b/nmstate-operator/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.756838 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-m48zr_953ef395-07f2-4b90-8232-77b94a176094/manager/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.768790 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-lmj8n_a847c8c4-dd77-4cd8-9e06-5adb119c43fc/nmstate-webhook/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.769751 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-kh677_882965e2-7eb0-4971-9770-e750a8fe36dc/manager/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.785670 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854kdgg7_246657ac-def3-41ce-bd99-a8d00d97c86b/manager/0.log" Jan 21 14:16:58 crc kubenswrapper[4765]: I0121 14:16:58.952698 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-ccbfb74b7-bm4rb_5db9c466-59ec-47fb-8643-560935c3c92c/operator/0.log" Jan 21 14:17:00 crc kubenswrapper[4765]: I0121 14:17:00.218713 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75fcf77584-5dfd7_af5f1c65-c317-4058-9d98-066b866bf83a/manager/0.log" Jan 21 14:17:00 crc kubenswrapper[4765]: I0121 14:17:00.231142 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-p9ml4_d35e26b9-ec61-4be2-b6f6-f40544f4094f/registry-server/0.log" Jan 21 14:17:00 crc kubenswrapper[4765]: I0121 14:17:00.304473 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-kvhff_17d3ffc3-5383-4beb-91d4-db120ddb1c74/manager/0.log" Jan 21 14:17:00 crc kubenswrapper[4765]: I0121 14:17:00.334350 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-97x9c_2bc79302-e5a0-4288-8b2e-ee371eb775a1/manager/0.log" Jan 21 14:17:00 crc kubenswrapper[4765]: I0121 14:17:00.356279 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-ql7j4_cead9f09-f8d9-4cf3-960e-eb1ba8f1fa99/operator/0.log" Jan 21 14:17:00 crc kubenswrapper[4765]: I0121 14:17:00.391250 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-gh9vl_c7a6160a-aef5-41af-b1cc-cc2cd97125d7/manager/0.log" Jan 21 14:17:00 crc kubenswrapper[4765]: I0121 14:17:00.514919 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-dhcgg_4c4840ab-a9b6-4243-a2f8-e21eaa84f165/manager/0.log" Jan 21 14:17:00 crc kubenswrapper[4765]: I0121 14:17:00.547586 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-s6zq8_be3fcc93-c1a3-4191-8f75-4d8aa5767593/manager/0.log" Jan 21 14:17:00 crc kubenswrapper[4765]: I0121 14:17:00.562041 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-8r9cq_2d19b122-8cf4-4b4a-8d31-037af2fd65fb/manager/0.log" Jan 21 14:17:03 crc kubenswrapper[4765]: I0121 14:17:03.075006 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z68f6_22f3d99e-f58c-4caa-be45-b879c6b614d3/kube-multus-additional-cni-plugins/0.log" Jan 21 14:17:03 crc kubenswrapper[4765]: I0121 14:17:03.084782 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z68f6_22f3d99e-f58c-4caa-be45-b879c6b614d3/egress-router-binary-copy/0.log" Jan 21 14:17:03 crc kubenswrapper[4765]: I0121 14:17:03.094514 4765 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z68f6_22f3d99e-f58c-4caa-be45-b879c6b614d3/cni-plugins/0.log" Jan 21 14:17:03 crc kubenswrapper[4765]: I0121 14:17:03.100668 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z68f6_22f3d99e-f58c-4caa-be45-b879c6b614d3/bond-cni-plugin/0.log" Jan 21 14:17:03 crc kubenswrapper[4765]: I0121 14:17:03.107832 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z68f6_22f3d99e-f58c-4caa-be45-b879c6b614d3/routeoverride-cni/0.log" Jan 21 14:17:03 crc kubenswrapper[4765]: I0121 14:17:03.114965 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z68f6_22f3d99e-f58c-4caa-be45-b879c6b614d3/whereabouts-cni-bincopy/0.log" Jan 21 14:17:03 crc kubenswrapper[4765]: I0121 14:17:03.125543 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z68f6_22f3d99e-f58c-4caa-be45-b879c6b614d3/whereabouts-cni/0.log" Jan 21 14:17:03 crc kubenswrapper[4765]: I0121 14:17:03.160628 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-79kcs_17f0cd0d-b1e3-42d0-abde-21e830e40e5d/multus-admission-controller/0.log" Jan 21 14:17:03 crc kubenswrapper[4765]: I0121 14:17:03.168923 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-79kcs_17f0cd0d-b1e3-42d0-abde-21e830e40e5d/kube-rbac-proxy/0.log" Jan 21 14:17:03 crc kubenswrapper[4765]: I0121 14:17:03.228653 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bplfq_d9b9a5be-6b15-46d2-8715-506efdae8ae7/kube-multus/2.log" Jan 21 14:17:03 crc kubenswrapper[4765]: I0121 14:17:03.333090 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bplfq_d9b9a5be-6b15-46d2-8715-506efdae8ae7/kube-multus/3.log" Jan 21 14:17:03 crc kubenswrapper[4765]: I0121 14:17:03.379770 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-4t7jw_d8dea79f-de5c-4034-9742-c322b723a59c/network-metrics-daemon/0.log" Jan 21 14:17:03 crc kubenswrapper[4765]: I0121 14:17:03.385666 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-4t7jw_d8dea79f-de5c-4034-9742-c322b723a59c/kube-rbac-proxy/0.log" Jan 21 14:17:05 crc kubenswrapper[4765]: I0121 14:17:05.614685 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:17:05 crc kubenswrapper[4765]: E0121 14:17:05.615781 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:17:18 crc kubenswrapper[4765]: I0121 14:17:18.614342 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:17:18 crc kubenswrapper[4765]: E0121 14:17:18.616432 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:17:32 crc kubenswrapper[4765]: I0121 14:17:32.613804 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:17:32 crc kubenswrapper[4765]: E0121 14:17:32.615869 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:17:45 crc kubenswrapper[4765]: I0121 14:17:45.614144 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:17:45 crc kubenswrapper[4765]: E0121 14:17:45.614980 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:17:56 crc kubenswrapper[4765]: I0121 14:17:56.614316 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:17:56 crc kubenswrapper[4765]: E0121 14:17:56.615125 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:18:07 crc kubenswrapper[4765]: I0121 14:18:07.615055 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:18:07 crc kubenswrapper[4765]: E0121 14:18:07.615753 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:18:18 crc kubenswrapper[4765]: I0121 14:18:18.614073 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:18:18 crc kubenswrapper[4765]: E0121 14:18:18.614845 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:18:33 crc kubenswrapper[4765]: I0121 14:18:33.613733 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:18:33 crc kubenswrapper[4765]: E0121 14:18:33.615562 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:18:45 crc kubenswrapper[4765]: I0121 14:18:45.618979 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:18:45 crc kubenswrapper[4765]: E0121 14:18:45.620109 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:18:58 crc kubenswrapper[4765]: I0121 14:18:58.614282 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:18:58 crc kubenswrapper[4765]: E0121 14:18:58.615078 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:19:10 crc kubenswrapper[4765]: I0121 14:19:10.614305 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:19:10 crc kubenswrapper[4765]: E0121 14:19:10.615103 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:19:22 crc kubenswrapper[4765]: I0121 14:19:22.614403 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:19:22 crc kubenswrapper[4765]: E0121 14:19:22.615322 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:19:37 crc kubenswrapper[4765]: I0121 14:19:37.613538 4765 
scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:19:37 crc kubenswrapper[4765]: E0121 14:19:37.614610 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:19:48 crc kubenswrapper[4765]: I0121 14:19:48.863483 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fvm6x"] Jan 21 14:19:48 crc kubenswrapper[4765]: E0121 14:19:48.864722 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10a1c40-bd4f-4051-856f-c1a45f08b48e" containerName="collect-profiles" Jan 21 14:19:48 crc kubenswrapper[4765]: I0121 14:19:48.864742 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10a1c40-bd4f-4051-856f-c1a45f08b48e" containerName="collect-profiles" Jan 21 14:19:48 crc kubenswrapper[4765]: I0121 14:19:48.864982 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10a1c40-bd4f-4051-856f-c1a45f08b48e" containerName="collect-profiles" Jan 21 14:19:48 crc kubenswrapper[4765]: I0121 14:19:48.866806 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fvm6x" Jan 21 14:19:48 crc kubenswrapper[4765]: I0121 14:19:48.903715 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fvm6x"] Jan 21 14:19:48 crc kubenswrapper[4765]: I0121 14:19:48.922177 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl2nr\" (UniqueName: \"kubernetes.io/projected/354e0ed7-7acc-4c39-812e-de745edd7e63-kube-api-access-rl2nr\") pod \"redhat-operators-fvm6x\" (UID: \"354e0ed7-7acc-4c39-812e-de745edd7e63\") " pod="openshift-marketplace/redhat-operators-fvm6x" Jan 21 14:19:48 crc kubenswrapper[4765]: I0121 14:19:48.922239 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354e0ed7-7acc-4c39-812e-de745edd7e63-utilities\") pod \"redhat-operators-fvm6x\" (UID: \"354e0ed7-7acc-4c39-812e-de745edd7e63\") " pod="openshift-marketplace/redhat-operators-fvm6x" Jan 21 14:19:48 crc kubenswrapper[4765]: I0121 14:19:48.922395 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354e0ed7-7acc-4c39-812e-de745edd7e63-catalog-content\") pod \"redhat-operators-fvm6x\" (UID: \"354e0ed7-7acc-4c39-812e-de745edd7e63\") " pod="openshift-marketplace/redhat-operators-fvm6x" Jan 21 14:19:49 crc kubenswrapper[4765]: I0121 14:19:49.024366 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354e0ed7-7acc-4c39-812e-de745edd7e63-catalog-content\") pod \"redhat-operators-fvm6x\" (UID: \"354e0ed7-7acc-4c39-812e-de745edd7e63\") " pod="openshift-marketplace/redhat-operators-fvm6x" Jan 21 14:19:49 crc kubenswrapper[4765]: I0121 14:19:49.024477 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl2nr\" (UniqueName: 
\"kubernetes.io/projected/354e0ed7-7acc-4c39-812e-de745edd7e63-kube-api-access-rl2nr\") pod \"redhat-operators-fvm6x\" (UID: \"354e0ed7-7acc-4c39-812e-de745edd7e63\") " pod="openshift-marketplace/redhat-operators-fvm6x" Jan 21 14:19:49 crc kubenswrapper[4765]: I0121 14:19:49.024514 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354e0ed7-7acc-4c39-812e-de745edd7e63-utilities\") pod \"redhat-operators-fvm6x\" (UID: \"354e0ed7-7acc-4c39-812e-de745edd7e63\") " pod="openshift-marketplace/redhat-operators-fvm6x" Jan 21 14:19:49 crc kubenswrapper[4765]: I0121 14:19:49.025115 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354e0ed7-7acc-4c39-812e-de745edd7e63-utilities\") pod \"redhat-operators-fvm6x\" (UID: \"354e0ed7-7acc-4c39-812e-de745edd7e63\") " pod="openshift-marketplace/redhat-operators-fvm6x" Jan 21 14:19:49 crc kubenswrapper[4765]: I0121 14:19:49.025573 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354e0ed7-7acc-4c39-812e-de745edd7e63-catalog-content\") pod \"redhat-operators-fvm6x\" (UID: \"354e0ed7-7acc-4c39-812e-de745edd7e63\") " pod="openshift-marketplace/redhat-operators-fvm6x" Jan 21 14:19:49 crc kubenswrapper[4765]: I0121 14:19:49.044556 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl2nr\" (UniqueName: \"kubernetes.io/projected/354e0ed7-7acc-4c39-812e-de745edd7e63-kube-api-access-rl2nr\") pod \"redhat-operators-fvm6x\" (UID: \"354e0ed7-7acc-4c39-812e-de745edd7e63\") " pod="openshift-marketplace/redhat-operators-fvm6x" Jan 21 14:19:49 crc kubenswrapper[4765]: I0121 14:19:49.187171 4765 util.go:30] "No sandbox for pod can be found. 
Jan 21 14:19:49 crc kubenswrapper[4765]: I0121 14:19:49.187171 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fvm6x"
Jan 21 14:19:49 crc kubenswrapper[4765]: I0121 14:19:49.674420 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fvm6x"]
Jan 21 14:19:50 crc kubenswrapper[4765]: I0121 14:19:50.466039 4765 generic.go:334] "Generic (PLEG): container finished" podID="354e0ed7-7acc-4c39-812e-de745edd7e63" containerID="8758af741833c919a6fada6faebcdbfbefe57edd44bac2d365079d916ec3cbcd" exitCode=0
Jan 21 14:19:50 crc kubenswrapper[4765]: I0121 14:19:50.466171 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvm6x" event={"ID":"354e0ed7-7acc-4c39-812e-de745edd7e63","Type":"ContainerDied","Data":"8758af741833c919a6fada6faebcdbfbefe57edd44bac2d365079d916ec3cbcd"}
Jan 21 14:19:50 crc kubenswrapper[4765]: I0121 14:19:50.468345 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvm6x" event={"ID":"354e0ed7-7acc-4c39-812e-de745edd7e63","Type":"ContainerStarted","Data":"33e421ae114413b0d82aa3ed99f5e5831d2fd8a4210949f7b40e9f20234b87ab"}
Jan 21 14:19:50 crc kubenswrapper[4765]: I0121 14:19:50.468568 4765 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 14:19:51 crc kubenswrapper[4765]: I0121 14:19:51.474779 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvm6x" event={"ID":"354e0ed7-7acc-4c39-812e-de745edd7e63","Type":"ContainerStarted","Data":"01b35ff50fbcb8afdac1b24be51ecca8b97ab0adb23a3d8ae843142ad415bcb1"}
Jan 21 14:19:52 crc kubenswrapper[4765]: I0121 14:19:52.614084 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435"
Jan 21 14:19:52 crc kubenswrapper[4765]: E0121 14:19:52.614813 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:19:55 crc kubenswrapper[4765]: I0121 14:19:55.517314 4765 generic.go:334] "Generic (PLEG): container finished" podID="354e0ed7-7acc-4c39-812e-de745edd7e63" containerID="01b35ff50fbcb8afdac1b24be51ecca8b97ab0adb23a3d8ae843142ad415bcb1" exitCode=0
Jan 21 14:19:55 crc kubenswrapper[4765]: I0121 14:19:55.517936 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvm6x" event={"ID":"354e0ed7-7acc-4c39-812e-de745edd7e63","Type":"ContainerDied","Data":"01b35ff50fbcb8afdac1b24be51ecca8b97ab0adb23a3d8ae843142ad415bcb1"}
Jan 21 14:19:57 crc kubenswrapper[4765]: I0121 14:19:57.569316 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvm6x" event={"ID":"354e0ed7-7acc-4c39-812e-de745edd7e63","Type":"ContainerStarted","Data":"b92ee5310e395a135b2b92bb09ecbbbbb76fe4199d9b87f7746c9dabcfbb8e9b"}
Jan 21 14:19:57 crc kubenswrapper[4765]: I0121 14:19:57.595357 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fvm6x" podStartSLOduration=3.203776701 podStartE2EDuration="9.595339709s" podCreationTimestamp="2026-01-21 14:19:48 +0000 UTC" firstStartedPulling="2026-01-21 14:19:50.467932866 +0000 UTC m=+4651.485658678" lastFinishedPulling="2026-01-21 14:19:56.859495864 +0000 UTC m=+4657.877221686" observedRunningTime="2026-01-21 14:19:57.593610469 +0000 UTC m=+4658.611336291" watchObservedRunningTime="2026-01-21 14:19:57.595339709 +0000 UTC m=+4658.613065531"
Jan 21 14:19:59 crc kubenswrapper[4765]: I0121 14:19:59.187478 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fvm6x"
Jan 21 14:19:59 crc kubenswrapper[4765]: I0121 14:19:59.189155 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fvm6x"
Jan 21 14:20:00 crc kubenswrapper[4765]: I0121 14:20:00.233681 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fvm6x" podUID="354e0ed7-7acc-4c39-812e-de745edd7e63" containerName="registry-server" probeResult="failure" output=<
Jan 21 14:20:00 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s
Jan 21 14:20:00 crc kubenswrapper[4765]: >
Jan 21 14:20:03 crc kubenswrapper[4765]: I0121 14:20:03.616381 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435"
Jan 21 14:20:03 crc kubenswrapper[4765]: E0121 14:20:03.617018 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
Jan 21 14:20:09 crc kubenswrapper[4765]: I0121 14:20:09.840207 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fvm6x"
Jan 21 14:20:09 crc kubenswrapper[4765]: I0121 14:20:09.890640 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fvm6x"
Jan 21 14:20:10 crc kubenswrapper[4765]: I0121 14:20:10.077190 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fvm6x"]
Jan 21 14:20:11 crc kubenswrapper[4765]: I0121 14:20:11.704750 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fvm6x" podUID="354e0ed7-7acc-4c39-812e-de745edd7e63" containerName="registry-server" containerID="cri-o://b92ee5310e395a135b2b92bb09ecbbbbb76fe4199d9b87f7746c9dabcfbb8e9b" gracePeriod=2
Need to start a new one" pod="openshift-marketplace/redhat-operators-fvm6x" Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.241942 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl2nr\" (UniqueName: \"kubernetes.io/projected/354e0ed7-7acc-4c39-812e-de745edd7e63-kube-api-access-rl2nr\") pod \"354e0ed7-7acc-4c39-812e-de745edd7e63\" (UID: \"354e0ed7-7acc-4c39-812e-de745edd7e63\") " Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.242014 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354e0ed7-7acc-4c39-812e-de745edd7e63-catalog-content\") pod \"354e0ed7-7acc-4c39-812e-de745edd7e63\" (UID: \"354e0ed7-7acc-4c39-812e-de745edd7e63\") " Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.242118 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354e0ed7-7acc-4c39-812e-de745edd7e63-utilities\") pod \"354e0ed7-7acc-4c39-812e-de745edd7e63\" (UID: \"354e0ed7-7acc-4c39-812e-de745edd7e63\") " Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.243240 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/354e0ed7-7acc-4c39-812e-de745edd7e63-utilities" (OuterVolumeSpecName: "utilities") pod "354e0ed7-7acc-4c39-812e-de745edd7e63" (UID: "354e0ed7-7acc-4c39-812e-de745edd7e63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.258372 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354e0ed7-7acc-4c39-812e-de745edd7e63-kube-api-access-rl2nr" (OuterVolumeSpecName: "kube-api-access-rl2nr") pod "354e0ed7-7acc-4c39-812e-de745edd7e63" (UID: "354e0ed7-7acc-4c39-812e-de745edd7e63"). InnerVolumeSpecName "kube-api-access-rl2nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.344996 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/354e0ed7-7acc-4c39-812e-de745edd7e63-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.345062 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl2nr\" (UniqueName: \"kubernetes.io/projected/354e0ed7-7acc-4c39-812e-de745edd7e63-kube-api-access-rl2nr\") on node \"crc\" DevicePath \"\"" Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.380391 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/354e0ed7-7acc-4c39-812e-de745edd7e63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "354e0ed7-7acc-4c39-812e-de745edd7e63" (UID: "354e0ed7-7acc-4c39-812e-de745edd7e63"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.447651 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/354e0ed7-7acc-4c39-812e-de745edd7e63-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.721855 4765 generic.go:334] "Generic (PLEG): container finished" podID="354e0ed7-7acc-4c39-812e-de745edd7e63" containerID="b92ee5310e395a135b2b92bb09ecbbbbb76fe4199d9b87f7746c9dabcfbb8e9b" exitCode=0 Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.721914 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvm6x" event={"ID":"354e0ed7-7acc-4c39-812e-de745edd7e63","Type":"ContainerDied","Data":"b92ee5310e395a135b2b92bb09ecbbbbb76fe4199d9b87f7746c9dabcfbb8e9b"} Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.721963 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvm6x" event={"ID":"354e0ed7-7acc-4c39-812e-de745edd7e63","Type":"ContainerDied","Data":"33e421ae114413b0d82aa3ed99f5e5831d2fd8a4210949f7b40e9f20234b87ab"} Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.721990 4765 scope.go:117] "RemoveContainer" containerID="b92ee5310e395a135b2b92bb09ecbbbbb76fe4199d9b87f7746c9dabcfbb8e9b" Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.722186 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fvm6x" Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.778911 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fvm6x"] Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.787764 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fvm6x"] Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.803177 4765 scope.go:117] "RemoveContainer" containerID="01b35ff50fbcb8afdac1b24be51ecca8b97ab0adb23a3d8ae843142ad415bcb1" Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.849787 4765 scope.go:117] "RemoveContainer" containerID="8758af741833c919a6fada6faebcdbfbefe57edd44bac2d365079d916ec3cbcd" Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.880988 4765 scope.go:117] "RemoveContainer" containerID="b92ee5310e395a135b2b92bb09ecbbbbb76fe4199d9b87f7746c9dabcfbb8e9b" Jan 21 14:20:12 crc kubenswrapper[4765]: E0121 14:20:12.881701 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b92ee5310e395a135b2b92bb09ecbbbbb76fe4199d9b87f7746c9dabcfbb8e9b\": container with ID starting with b92ee5310e395a135b2b92bb09ecbbbbb76fe4199d9b87f7746c9dabcfbb8e9b not found: ID does not exist" containerID="b92ee5310e395a135b2b92bb09ecbbbbb76fe4199d9b87f7746c9dabcfbb8e9b" Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.881746 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b92ee5310e395a135b2b92bb09ecbbbbb76fe4199d9b87f7746c9dabcfbb8e9b"} err="failed to get container status \"b92ee5310e395a135b2b92bb09ecbbbbb76fe4199d9b87f7746c9dabcfbb8e9b\": rpc error: code = NotFound desc = could not find container \"b92ee5310e395a135b2b92bb09ecbbbbb76fe4199d9b87f7746c9dabcfbb8e9b\": container with ID starting with b92ee5310e395a135b2b92bb09ecbbbbb76fe4199d9b87f7746c9dabcfbb8e9b not found: ID does not exist" Jan 21 14:20:12 crc 
kubenswrapper[4765]: I0121 14:20:12.881772 4765 scope.go:117] "RemoveContainer" containerID="01b35ff50fbcb8afdac1b24be51ecca8b97ab0adb23a3d8ae843142ad415bcb1" Jan 21 14:20:12 crc kubenswrapper[4765]: E0121 14:20:12.882043 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01b35ff50fbcb8afdac1b24be51ecca8b97ab0adb23a3d8ae843142ad415bcb1\": container with ID starting with 01b35ff50fbcb8afdac1b24be51ecca8b97ab0adb23a3d8ae843142ad415bcb1 not found: ID does not exist" containerID="01b35ff50fbcb8afdac1b24be51ecca8b97ab0adb23a3d8ae843142ad415bcb1" Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.882066 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b35ff50fbcb8afdac1b24be51ecca8b97ab0adb23a3d8ae843142ad415bcb1"} err="failed to get container status \"01b35ff50fbcb8afdac1b24be51ecca8b97ab0adb23a3d8ae843142ad415bcb1\": rpc error: code = NotFound desc = could not find container \"01b35ff50fbcb8afdac1b24be51ecca8b97ab0adb23a3d8ae843142ad415bcb1\": container with ID starting with 01b35ff50fbcb8afdac1b24be51ecca8b97ab0adb23a3d8ae843142ad415bcb1 not found: ID does not exist" Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.882079 4765 scope.go:117] "RemoveContainer" containerID="8758af741833c919a6fada6faebcdbfbefe57edd44bac2d365079d916ec3cbcd" Jan 21 14:20:12 crc kubenswrapper[4765]: E0121 14:20:12.882350 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8758af741833c919a6fada6faebcdbfbefe57edd44bac2d365079d916ec3cbcd\": container with ID starting with 8758af741833c919a6fada6faebcdbfbefe57edd44bac2d365079d916ec3cbcd not found: ID does not exist" containerID="8758af741833c919a6fada6faebcdbfbefe57edd44bac2d365079d916ec3cbcd" Jan 21 14:20:12 crc kubenswrapper[4765]: I0121 14:20:12.882378 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8758af741833c919a6fada6faebcdbfbefe57edd44bac2d365079d916ec3cbcd"} err="failed to get container status \"8758af741833c919a6fada6faebcdbfbefe57edd44bac2d365079d916ec3cbcd\": rpc error: code = NotFound desc = could not find container \"8758af741833c919a6fada6faebcdbfbefe57edd44bac2d365079d916ec3cbcd\": container with ID starting with 8758af741833c919a6fada6faebcdbfbefe57edd44bac2d365079d916ec3cbcd not found: ID does not exist" Jan 21 14:20:13 crc kubenswrapper[4765]: I0121 14:20:13.624362 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354e0ed7-7acc-4c39-812e-de745edd7e63" path="/var/lib/kubelet/pods/354e0ed7-7acc-4c39-812e-de745edd7e63/volumes" Jan 21 14:20:14 crc kubenswrapper[4765]: I0121 14:20:14.614771 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:20:14 crc kubenswrapper[4765]: E0121 14:20:14.615343 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:20:25 crc kubenswrapper[4765]: I0121 14:20:25.613765 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" 
Jan 21 14:20:25 crc kubenswrapper[4765]: E0121 14:20:25.614606 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:20:38 crc kubenswrapper[4765]: I0121 14:20:38.616078 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:20:38 crc kubenswrapper[4765]: E0121 14:20:38.617318 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:20:39 crc kubenswrapper[4765]: I0121 14:20:39.226117 4765 scope.go:117] "RemoveContainer" containerID="335d19655d7dc2cfca3aed5c9e2315edf424762a90d23d834f5925fc963dc23e" Jan 21 14:20:50 crc kubenswrapper[4765]: I0121 14:20:50.615399 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:20:51 crc kubenswrapper[4765]: I0121 14:20:51.487099 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"2e4cffd9a21b0db7b2979a75be9cd7daae21d859411255361a9373987f2a69d5"} Jan 21 14:22:49 crc kubenswrapper[4765]: I0121 14:22:49.401605 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tblmj"] Jan 21 14:22:49 crc kubenswrapper[4765]: E0121 14:22:49.402804 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e0ed7-7acc-4c39-812e-de745edd7e63" containerName="extract-content" Jan 21 14:22:49 crc kubenswrapper[4765]: I0121 14:22:49.402821 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e0ed7-7acc-4c39-812e-de745edd7e63" containerName="extract-content" Jan 21 14:22:49 crc kubenswrapper[4765]: E0121 14:22:49.402844 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e0ed7-7acc-4c39-812e-de745edd7e63" containerName="extract-utilities" Jan 21 14:22:49 crc kubenswrapper[4765]: I0121 14:22:49.402850 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e0ed7-7acc-4c39-812e-de745edd7e63" containerName="extract-utilities" Jan 21 14:22:49 crc kubenswrapper[4765]: E0121 14:22:49.402869 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354e0ed7-7acc-4c39-812e-de745edd7e63" containerName="registry-server" Jan 21 14:22:49 crc kubenswrapper[4765]: I0121 14:22:49.402875 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="354e0ed7-7acc-4c39-812e-de745edd7e63" containerName="registry-server" Jan 21 14:22:49 crc kubenswrapper[4765]: I0121 14:22:49.403062 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="354e0ed7-7acc-4c39-812e-de745edd7e63" containerName="registry-server" Jan 21 14:22:49 crc kubenswrapper[4765]: I0121 14:22:49.404424 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tblmj" Jan 21 14:22:49 crc kubenswrapper[4765]: I0121 14:22:49.412827 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d12f6e-c08e-48c4-9e9d-d230c7013864-utilities\") pod \"redhat-marketplace-tblmj\" (UID: \"b2d12f6e-c08e-48c4-9e9d-d230c7013864\") " pod="openshift-marketplace/redhat-marketplace-tblmj" Jan 21 14:22:49 crc kubenswrapper[4765]: I0121 14:22:49.412910 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ttz6\" (UniqueName: \"kubernetes.io/projected/b2d12f6e-c08e-48c4-9e9d-d230c7013864-kube-api-access-4ttz6\") pod \"redhat-marketplace-tblmj\" (UID: \"b2d12f6e-c08e-48c4-9e9d-d230c7013864\") " pod="openshift-marketplace/redhat-marketplace-tblmj" Jan 21 14:22:49 crc kubenswrapper[4765]: I0121 14:22:49.413000 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d12f6e-c08e-48c4-9e9d-d230c7013864-catalog-content\") pod \"redhat-marketplace-tblmj\" (UID: \"b2d12f6e-c08e-48c4-9e9d-d230c7013864\") " pod="openshift-marketplace/redhat-marketplace-tblmj" Jan 21 14:22:49 crc kubenswrapper[4765]: I0121 14:22:49.420029 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tblmj"] Jan 21 14:22:49 crc kubenswrapper[4765]: I0121 14:22:49.514505 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d12f6e-c08e-48c4-9e9d-d230c7013864-catalog-content\") pod \"redhat-marketplace-tblmj\" (UID: \"b2d12f6e-c08e-48c4-9e9d-d230c7013864\") " pod="openshift-marketplace/redhat-marketplace-tblmj" Jan 21 14:22:49 crc kubenswrapper[4765]: I0121 14:22:49.514589 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d12f6e-c08e-48c4-9e9d-d230c7013864-utilities\") pod \"redhat-marketplace-tblmj\" (UID: \"b2d12f6e-c08e-48c4-9e9d-d230c7013864\") " pod="openshift-marketplace/redhat-marketplace-tblmj" Jan 21 14:22:49 crc kubenswrapper[4765]: I0121 14:22:49.514645 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ttz6\" (UniqueName: \"kubernetes.io/projected/b2d12f6e-c08e-48c4-9e9d-d230c7013864-kube-api-access-4ttz6\") pod \"redhat-marketplace-tblmj\" (UID: \"b2d12f6e-c08e-48c4-9e9d-d230c7013864\") " pod="openshift-marketplace/redhat-marketplace-tblmj" Jan 21 14:22:49 crc kubenswrapper[4765]: I0121 14:22:49.515036 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d12f6e-c08e-48c4-9e9d-d230c7013864-catalog-content\") pod \"redhat-marketplace-tblmj\" (UID: \"b2d12f6e-c08e-48c4-9e9d-d230c7013864\") " pod="openshift-marketplace/redhat-marketplace-tblmj" Jan 21 14:22:49 crc kubenswrapper[4765]: I0121 14:22:49.515378 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d12f6e-c08e-48c4-9e9d-d230c7013864-utilities\") pod \"redhat-marketplace-tblmj\" (UID: \"b2d12f6e-c08e-48c4-9e9d-d230c7013864\") " pod="openshift-marketplace/redhat-marketplace-tblmj" Jan 21 14:22:49 crc kubenswrapper[4765]: I0121 14:22:49.552130 4765 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4ttz6\" (UniqueName: \"kubernetes.io/projected/b2d12f6e-c08e-48c4-9e9d-d230c7013864-kube-api-access-4ttz6\") pod \"redhat-marketplace-tblmj\" (UID: \"b2d12f6e-c08e-48c4-9e9d-d230c7013864\") " pod="openshift-marketplace/redhat-marketplace-tblmj" Jan 21 14:22:49 crc kubenswrapper[4765]: I0121 14:22:49.737974 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tblmj" Jan 21 14:22:50 crc kubenswrapper[4765]: I0121 14:22:50.304815 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tblmj"] Jan 21 14:22:50 crc kubenswrapper[4765]: I0121 14:22:50.711329 4765 generic.go:334] "Generic (PLEG): container finished" podID="b2d12f6e-c08e-48c4-9e9d-d230c7013864" containerID="df009560927e8415314e57ea42fd58baca7480e27ee724c5984e352840985221" exitCode=0 Jan 21 14:22:50 crc kubenswrapper[4765]: I0121 14:22:50.711394 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tblmj" event={"ID":"b2d12f6e-c08e-48c4-9e9d-d230c7013864","Type":"ContainerDied","Data":"df009560927e8415314e57ea42fd58baca7480e27ee724c5984e352840985221"} Jan 21 14:22:50 crc kubenswrapper[4765]: I0121 14:22:50.711656 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tblmj" event={"ID":"b2d12f6e-c08e-48c4-9e9d-d230c7013864","Type":"ContainerStarted","Data":"bda51e8c7fbfb3ab32ce7646fd99f40cd2b5b6a13c52573d97a67d26ffd6c9df"} Jan 21 14:22:52 crc kubenswrapper[4765]: I0121 14:22:52.749446 4765 generic.go:334] "Generic (PLEG): container finished" podID="b2d12f6e-c08e-48c4-9e9d-d230c7013864" containerID="b1f554c9d97e849826af9a8b3ddf0307259ac06bd6b1fea76b0fb762c7188676" exitCode=0 Jan 21 14:22:52 crc kubenswrapper[4765]: I0121 14:22:52.749542 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tblmj" event={"ID":"b2d12f6e-c08e-48c4-9e9d-d230c7013864","Type":"ContainerDied","Data":"b1f554c9d97e849826af9a8b3ddf0307259ac06bd6b1fea76b0fb762c7188676"} Jan 21 14:22:53 crc kubenswrapper[4765]: I0121 14:22:53.760551 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tblmj" event={"ID":"b2d12f6e-c08e-48c4-9e9d-d230c7013864","Type":"ContainerStarted","Data":"258b3f9efda89897b85b2597edb5683b9b1a8563cbe82b363607ea5f64330f88"} Jan 21 14:22:53 crc kubenswrapper[4765]: I0121 14:22:53.794593 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tblmj" podStartSLOduration=2.358085471 podStartE2EDuration="4.794574013s" podCreationTimestamp="2026-01-21 14:22:49 +0000 UTC" firstStartedPulling="2026-01-21 14:22:50.715259212 +0000 UTC m=+4831.732985034" lastFinishedPulling="2026-01-21 14:22:53.151747754 +0000 UTC m=+4834.169473576" observedRunningTime="2026-01-21 14:22:53.786178706 +0000 UTC m=+4834.803904528" watchObservedRunningTime="2026-01-21 14:22:53.794574013 +0000 UTC m=+4834.812299835" Jan 21 14:22:55 crc kubenswrapper[4765]: I0121 14:22:55.138756 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ss9hp"] Jan 21 14:22:55 crc kubenswrapper[4765]: I0121 14:22:55.140840 4765 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ss9hp" Jan 21 14:22:55 crc kubenswrapper[4765]: I0121 14:22:55.173400 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ss9hp"] Jan 21 14:22:55 crc kubenswrapper[4765]: I0121 14:22:55.207862 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fab2097-84aa-4337-8ab1-6559dc30941c-utilities\") pod \"community-operators-ss9hp\" (UID: \"1fab2097-84aa-4337-8ab1-6559dc30941c\") " pod="openshift-marketplace/community-operators-ss9hp" Jan 21 14:22:55 crc kubenswrapper[4765]: I0121 14:22:55.207958 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fab2097-84aa-4337-8ab1-6559dc30941c-catalog-content\") pod \"community-operators-ss9hp\" (UID: \"1fab2097-84aa-4337-8ab1-6559dc30941c\") " pod="openshift-marketplace/community-operators-ss9hp" Jan 21 14:22:55 crc kubenswrapper[4765]: I0121 14:22:55.208099 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf2b2\" (UniqueName: \"kubernetes.io/projected/1fab2097-84aa-4337-8ab1-6559dc30941c-kube-api-access-wf2b2\") pod \"community-operators-ss9hp\" (UID: \"1fab2097-84aa-4337-8ab1-6559dc30941c\") " pod="openshift-marketplace/community-operators-ss9hp" Jan 21 14:22:55 crc kubenswrapper[4765]: I0121 14:22:55.309600 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fab2097-84aa-4337-8ab1-6559dc30941c-catalog-content\") pod \"community-operators-ss9hp\" (UID: \"1fab2097-84aa-4337-8ab1-6559dc30941c\") " pod="openshift-marketplace/community-operators-ss9hp" Jan 21 14:22:55 crc kubenswrapper[4765]: I0121 14:22:55.309692 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf2b2\" (UniqueName: \"kubernetes.io/projected/1fab2097-84aa-4337-8ab1-6559dc30941c-kube-api-access-wf2b2\") pod \"community-operators-ss9hp\" (UID: \"1fab2097-84aa-4337-8ab1-6559dc30941c\") " pod="openshift-marketplace/community-operators-ss9hp" Jan 21 14:22:55 crc kubenswrapper[4765]: I0121 14:22:55.309776 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fab2097-84aa-4337-8ab1-6559dc30941c-utilities\") pod \"community-operators-ss9hp\" (UID: \"1fab2097-84aa-4337-8ab1-6559dc30941c\") " pod="openshift-marketplace/community-operators-ss9hp" Jan 21 14:22:55 crc kubenswrapper[4765]: I0121 14:22:55.310329 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fab2097-84aa-4337-8ab1-6559dc30941c-utilities\") pod \"community-operators-ss9hp\" (UID: \"1fab2097-84aa-4337-8ab1-6559dc30941c\") " pod="openshift-marketplace/community-operators-ss9hp" Jan 21 14:22:55 crc kubenswrapper[4765]: I0121 14:22:55.310321 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fab2097-84aa-4337-8ab1-6559dc30941c-catalog-content\") pod \"community-operators-ss9hp\" (UID: \"1fab2097-84aa-4337-8ab1-6559dc30941c\") " pod="openshift-marketplace/community-operators-ss9hp" Jan 21 14:22:55 crc kubenswrapper[4765]: I0121 14:22:55.331996 4765 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wf2b2\" (UniqueName: \"kubernetes.io/projected/1fab2097-84aa-4337-8ab1-6559dc30941c-kube-api-access-wf2b2\") pod \"community-operators-ss9hp\" (UID: \"1fab2097-84aa-4337-8ab1-6559dc30941c\") " pod="openshift-marketplace/community-operators-ss9hp" Jan 21 14:22:55 crc kubenswrapper[4765]: I0121 14:22:55.462712 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ss9hp" Jan 21 14:22:56 crc kubenswrapper[4765]: I0121 14:22:56.010725 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ss9hp"] Jan 21 14:22:56 crc kubenswrapper[4765]: W0121 14:22:56.018179 4765 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fab2097_84aa_4337_8ab1_6559dc30941c.slice/crio-4cc3bc033bc839a77de322761c882d25bd8322dcef4348540820abd1cd76038c WatchSource:0}: Error finding container 4cc3bc033bc839a77de322761c882d25bd8322dcef4348540820abd1cd76038c: Status 404 returned error can't find the container with id 4cc3bc033bc839a77de322761c882d25bd8322dcef4348540820abd1cd76038c Jan 21 14:22:56 crc kubenswrapper[4765]: I0121 14:22:56.797090 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ss9hp" event={"ID":"1fab2097-84aa-4337-8ab1-6559dc30941c","Type":"ContainerStarted","Data":"4cc3bc033bc839a77de322761c882d25bd8322dcef4348540820abd1cd76038c"} Jan 21 14:22:57 crc kubenswrapper[4765]: I0121 14:22:57.809929 4765 generic.go:334] "Generic (PLEG): container finished" podID="1fab2097-84aa-4337-8ab1-6559dc30941c" containerID="7c378c53d1f1aa087c5dbd2dd6ee07a00f8d5c0966182c01bfb522cbda100112" exitCode=0 Jan 21 14:22:57 crc kubenswrapper[4765]: I0121 14:22:57.810093 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ss9hp" event={"ID":"1fab2097-84aa-4337-8ab1-6559dc30941c","Type":"ContainerDied","Data":"7c378c53d1f1aa087c5dbd2dd6ee07a00f8d5c0966182c01bfb522cbda100112"} Jan 21 14:22:58 crc kubenswrapper[4765]: I0121 14:22:58.820399 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ss9hp" event={"ID":"1fab2097-84aa-4337-8ab1-6559dc30941c","Type":"ContainerStarted","Data":"af794d3851bf5fc357e6d243d61f3adf42a02813d4903104fab2a2d9faeca09f"} Jan 21 14:22:59 crc kubenswrapper[4765]: I0121 14:22:59.741485 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tblmj" Jan 21 14:22:59 crc kubenswrapper[4765]: I0121 14:22:59.742283 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tblmj" Jan 21 14:22:59 crc kubenswrapper[4765]: I0121 14:22:59.858229 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tblmj" Jan 21 14:23:00 crc kubenswrapper[4765]: I0121 14:23:00.842339 4765 generic.go:334] "Generic (PLEG): container finished" podID="1fab2097-84aa-4337-8ab1-6559dc30941c" containerID="af794d3851bf5fc357e6d243d61f3adf42a02813d4903104fab2a2d9faeca09f" exitCode=0 Jan 21 14:23:00 crc kubenswrapper[4765]: I0121 14:23:00.843320 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ss9hp" 
event={"ID":"1fab2097-84aa-4337-8ab1-6559dc30941c","Type":"ContainerDied","Data":"af794d3851bf5fc357e6d243d61f3adf42a02813d4903104fab2a2d9faeca09f"} Jan 21 14:23:00 crc kubenswrapper[4765]: I0121 14:23:00.937455 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tblmj" Jan 21 14:23:01 crc kubenswrapper[4765]: I0121 14:23:01.853146 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ss9hp" event={"ID":"1fab2097-84aa-4337-8ab1-6559dc30941c","Type":"ContainerStarted","Data":"6d3c0b1342c8ffe27aa2ca705aabb44df842cc4f120e293cd6a47a5372ae2f84"} Jan 21 14:23:03 crc kubenswrapper[4765]: I0121 14:23:03.326642 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ss9hp" podStartSLOduration=4.875914231 podStartE2EDuration="8.326625145s" podCreationTimestamp="2026-01-21 14:22:55 +0000 UTC" firstStartedPulling="2026-01-21 14:22:57.814045345 +0000 UTC m=+4838.831771157" lastFinishedPulling="2026-01-21 14:23:01.264756249 +0000 UTC m=+4842.282482071" observedRunningTime="2026-01-21 14:23:01.895420713 +0000 UTC m=+4842.913146545" watchObservedRunningTime="2026-01-21 14:23:03.326625145 +0000 UTC m=+4844.344350967" Jan 21 14:23:03 crc kubenswrapper[4765]: I0121 14:23:03.331241 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tblmj"] Jan 21 14:23:03 crc kubenswrapper[4765]: I0121 14:23:03.331505 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tblmj" podUID="b2d12f6e-c08e-48c4-9e9d-d230c7013864" containerName="registry-server" containerID="cri-o://258b3f9efda89897b85b2597edb5683b9b1a8563cbe82b363607ea5f64330f88" gracePeriod=2 Jan 21 14:23:03 crc kubenswrapper[4765]: I0121 14:23:03.878680 4765 generic.go:334] "Generic (PLEG): container finished" podID="b2d12f6e-c08e-48c4-9e9d-d230c7013864" containerID="258b3f9efda89897b85b2597edb5683b9b1a8563cbe82b363607ea5f64330f88" exitCode=0 Jan 21 14:23:03 crc kubenswrapper[4765]: I0121 14:23:03.879101 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tblmj" event={"ID":"b2d12f6e-c08e-48c4-9e9d-d230c7013864","Type":"ContainerDied","Data":"258b3f9efda89897b85b2597edb5683b9b1a8563cbe82b363607ea5f64330f88"} Jan 21 14:23:04 crc kubenswrapper[4765]: I0121 14:23:04.490398 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tblmj" Jan 21 14:23:04 crc kubenswrapper[4765]: I0121 14:23:04.622933 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d12f6e-c08e-48c4-9e9d-d230c7013864-catalog-content\") pod \"b2d12f6e-c08e-48c4-9e9d-d230c7013864\" (UID: \"b2d12f6e-c08e-48c4-9e9d-d230c7013864\") " Jan 21 14:23:04 crc kubenswrapper[4765]: I0121 14:23:04.622990 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d12f6e-c08e-48c4-9e9d-d230c7013864-utilities\") pod \"b2d12f6e-c08e-48c4-9e9d-d230c7013864\" (UID: \"b2d12f6e-c08e-48c4-9e9d-d230c7013864\") " Jan 21 14:23:04 crc kubenswrapper[4765]: I0121 14:23:04.623021 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ttz6\" (UniqueName: \"kubernetes.io/projected/b2d12f6e-c08e-48c4-9e9d-d230c7013864-kube-api-access-4ttz6\") pod \"b2d12f6e-c08e-48c4-9e9d-d230c7013864\" (UID: \"b2d12f6e-c08e-48c4-9e9d-d230c7013864\") " Jan 21 14:23:04 crc kubenswrapper[4765]: I0121 14:23:04.625238 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2d12f6e-c08e-48c4-9e9d-d230c7013864-utilities" (OuterVolumeSpecName: "utilities") pod "b2d12f6e-c08e-48c4-9e9d-d230c7013864" (UID: "b2d12f6e-c08e-48c4-9e9d-d230c7013864"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:23:04 crc kubenswrapper[4765]: I0121 14:23:04.633293 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2d12f6e-c08e-48c4-9e9d-d230c7013864-kube-api-access-4ttz6" (OuterVolumeSpecName: "kube-api-access-4ttz6") pod "b2d12f6e-c08e-48c4-9e9d-d230c7013864" (UID: "b2d12f6e-c08e-48c4-9e9d-d230c7013864"). InnerVolumeSpecName "kube-api-access-4ttz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:23:04 crc kubenswrapper[4765]: I0121 14:23:04.647638 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2d12f6e-c08e-48c4-9e9d-d230c7013864-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2d12f6e-c08e-48c4-9e9d-d230c7013864" (UID: "b2d12f6e-c08e-48c4-9e9d-d230c7013864"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:23:04 crc kubenswrapper[4765]: I0121 14:23:04.725526 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ttz6\" (UniqueName: \"kubernetes.io/projected/b2d12f6e-c08e-48c4-9e9d-d230c7013864-kube-api-access-4ttz6\") on node \"crc\" DevicePath \"\"" Jan 21 14:23:04 crc kubenswrapper[4765]: I0121 14:23:04.725560 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d12f6e-c08e-48c4-9e9d-d230c7013864-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:23:04 crc kubenswrapper[4765]: I0121 14:23:04.725571 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d12f6e-c08e-48c4-9e9d-d230c7013864-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:23:04 crc kubenswrapper[4765]: I0121 14:23:04.890178 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tblmj" event={"ID":"b2d12f6e-c08e-48c4-9e9d-d230c7013864","Type":"ContainerDied","Data":"bda51e8c7fbfb3ab32ce7646fd99f40cd2b5b6a13c52573d97a67d26ffd6c9df"} Jan 21 14:23:04 crc kubenswrapper[4765]: I0121 14:23:04.890457 4765 scope.go:117] "RemoveContainer" containerID="258b3f9efda89897b85b2597edb5683b9b1a8563cbe82b363607ea5f64330f88" Jan 21 14:23:04 crc kubenswrapper[4765]: I0121 14:23:04.890615 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tblmj" Jan 21 14:23:04 crc kubenswrapper[4765]: I0121 14:23:04.932092 4765 scope.go:117] "RemoveContainer" containerID="b1f554c9d97e849826af9a8b3ddf0307259ac06bd6b1fea76b0fb762c7188676" Jan 21 14:23:04 crc kubenswrapper[4765]: I0121 14:23:04.942459 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tblmj"] Jan 21 14:23:04 crc kubenswrapper[4765]: I0121 14:23:04.952745 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tblmj"] Jan 21 14:23:04 crc kubenswrapper[4765]: I0121 14:23:04.952903 4765 scope.go:117] "RemoveContainer" containerID="df009560927e8415314e57ea42fd58baca7480e27ee724c5984e352840985221" Jan 21 14:23:05 crc kubenswrapper[4765]: I0121 14:23:05.463829 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ss9hp" Jan 21 14:23:05 crc kubenswrapper[4765]: I0121 14:23:05.463877 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ss9hp" Jan 21 14:23:05 crc kubenswrapper[4765]: I0121 14:23:05.626293 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2d12f6e-c08e-48c4-9e9d-d230c7013864" path="/var/lib/kubelet/pods/b2d12f6e-c08e-48c4-9e9d-d230c7013864/volumes" Jan 21 14:23:06 crc kubenswrapper[4765]: I0121 14:23:06.531406 4765 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ss9hp" podUID="1fab2097-84aa-4337-8ab1-6559dc30941c" containerName="registry-server" probeResult="failure" output=< Jan 21 14:23:06 crc kubenswrapper[4765]: timeout: failed to connect service ":50051" within 1s Jan 21 14:23:06 crc kubenswrapper[4765]: > Jan 21 14:23:14 crc kubenswrapper[4765]: I0121 14:23:14.445692 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:23:14 crc kubenswrapper[4765]: I0121 14:23:14.445944 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:23:15 crc kubenswrapper[4765]: I0121 14:23:15.513617 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ss9hp" Jan 21 14:23:15 crc kubenswrapper[4765]: I0121 14:23:15.581839 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ss9hp" Jan 21 14:23:15 crc kubenswrapper[4765]: I0121 14:23:15.756359 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ss9hp"] Jan 21 14:23:17 crc kubenswrapper[4765]: I0121 14:23:17.175499 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ss9hp" podUID="1fab2097-84aa-4337-8ab1-6559dc30941c" containerName="registry-server" containerID="cri-o://6d3c0b1342c8ffe27aa2ca705aabb44df842cc4f120e293cd6a47a5372ae2f84" gracePeriod=2 Jan 21 14:23:17 crc kubenswrapper[4765]: I0121 14:23:17.665410 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ss9hp" Jan 21 14:23:17 crc kubenswrapper[4765]: I0121 14:23:17.803232 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fab2097-84aa-4337-8ab1-6559dc30941c-utilities\") pod \"1fab2097-84aa-4337-8ab1-6559dc30941c\" (UID: \"1fab2097-84aa-4337-8ab1-6559dc30941c\") " Jan 21 14:23:17 crc kubenswrapper[4765]: I0121 14:23:17.803601 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fab2097-84aa-4337-8ab1-6559dc30941c-catalog-content\") pod \"1fab2097-84aa-4337-8ab1-6559dc30941c\" (UID: \"1fab2097-84aa-4337-8ab1-6559dc30941c\") " Jan 21 14:23:17 crc kubenswrapper[4765]: I0121 14:23:17.803652 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf2b2\" (UniqueName: \"kubernetes.io/projected/1fab2097-84aa-4337-8ab1-6559dc30941c-kube-api-access-wf2b2\") pod \"1fab2097-84aa-4337-8ab1-6559dc30941c\" (UID: \"1fab2097-84aa-4337-8ab1-6559dc30941c\") " Jan 21 14:23:17 crc kubenswrapper[4765]: I0121 14:23:17.804821 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fab2097-84aa-4337-8ab1-6559dc30941c-utilities" (OuterVolumeSpecName: "utilities") pod "1fab2097-84aa-4337-8ab1-6559dc30941c" (UID: "1fab2097-84aa-4337-8ab1-6559dc30941c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:23:17 crc kubenswrapper[4765]: I0121 14:23:17.819394 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fab2097-84aa-4337-8ab1-6559dc30941c-kube-api-access-wf2b2" (OuterVolumeSpecName: "kube-api-access-wf2b2") pod "1fab2097-84aa-4337-8ab1-6559dc30941c" (UID: "1fab2097-84aa-4337-8ab1-6559dc30941c"). InnerVolumeSpecName "kube-api-access-wf2b2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:23:17 crc kubenswrapper[4765]: I0121 14:23:17.857367 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fab2097-84aa-4337-8ab1-6559dc30941c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fab2097-84aa-4337-8ab1-6559dc30941c" (UID: "1fab2097-84aa-4337-8ab1-6559dc30941c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:23:17 crc kubenswrapper[4765]: I0121 14:23:17.906257 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf2b2\" (UniqueName: \"kubernetes.io/projected/1fab2097-84aa-4337-8ab1-6559dc30941c-kube-api-access-wf2b2\") on node \"crc\" DevicePath \"\"" Jan 21 14:23:17 crc kubenswrapper[4765]: I0121 14:23:17.906290 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fab2097-84aa-4337-8ab1-6559dc30941c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:23:17 crc kubenswrapper[4765]: I0121 14:23:17.906301 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fab2097-84aa-4337-8ab1-6559dc30941c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:23:18 crc kubenswrapper[4765]: I0121 14:23:18.186642 4765 generic.go:334] "Generic (PLEG): container finished" podID="1fab2097-84aa-4337-8ab1-6559dc30941c" containerID="6d3c0b1342c8ffe27aa2ca705aabb44df842cc4f120e293cd6a47a5372ae2f84" exitCode=0 Jan 21 14:23:18 crc kubenswrapper[4765]: I0121 14:23:18.186700 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ss9hp" event={"ID":"1fab2097-84aa-4337-8ab1-6559dc30941c","Type":"ContainerDied","Data":"6d3c0b1342c8ffe27aa2ca705aabb44df842cc4f120e293cd6a47a5372ae2f84"} Jan 21 14:23:18 crc kubenswrapper[4765]: I0121 14:23:18.186949 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ss9hp" event={"ID":"1fab2097-84aa-4337-8ab1-6559dc30941c","Type":"ContainerDied","Data":"4cc3bc033bc839a77de322761c882d25bd8322dcef4348540820abd1cd76038c"} Jan 21 14:23:18 crc kubenswrapper[4765]: I0121 14:23:18.186973 4765 scope.go:117] "RemoveContainer" containerID="6d3c0b1342c8ffe27aa2ca705aabb44df842cc4f120e293cd6a47a5372ae2f84" Jan 21 14:23:18 crc kubenswrapper[4765]: I0121 14:23:18.187073 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ss9hp" Jan 21 14:23:18 crc kubenswrapper[4765]: I0121 14:23:18.225871 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ss9hp"] Jan 21 14:23:18 crc kubenswrapper[4765]: I0121 14:23:18.231577 4765 scope.go:117] "RemoveContainer" containerID="af794d3851bf5fc357e6d243d61f3adf42a02813d4903104fab2a2d9faeca09f" Jan 21 14:23:18 crc kubenswrapper[4765]: I0121 14:23:18.240108 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ss9hp"] Jan 21 14:23:18 crc kubenswrapper[4765]: I0121 14:23:18.255600 4765 scope.go:117] "RemoveContainer" containerID="7c378c53d1f1aa087c5dbd2dd6ee07a00f8d5c0966182c01bfb522cbda100112" Jan 21 14:23:18 crc kubenswrapper[4765]: I0121 14:23:18.288843 4765 scope.go:117] "RemoveContainer" containerID="6d3c0b1342c8ffe27aa2ca705aabb44df842cc4f120e293cd6a47a5372ae2f84" Jan 21 14:23:18 crc kubenswrapper[4765]: E0121 14:23:18.289342 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d3c0b1342c8ffe27aa2ca705aabb44df842cc4f120e293cd6a47a5372ae2f84\": container with ID starting with 6d3c0b1342c8ffe27aa2ca705aabb44df842cc4f120e293cd6a47a5372ae2f84 not found: ID does not exist" containerID="6d3c0b1342c8ffe27aa2ca705aabb44df842cc4f120e293cd6a47a5372ae2f84" Jan 21 14:23:18 crc kubenswrapper[4765]: I0121 14:23:18.289375 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d3c0b1342c8ffe27aa2ca705aabb44df842cc4f120e293cd6a47a5372ae2f84"} err="failed to get container status \"6d3c0b1342c8ffe27aa2ca705aabb44df842cc4f120e293cd6a47a5372ae2f84\": rpc error: code = NotFound desc = could not find container \"6d3c0b1342c8ffe27aa2ca705aabb44df842cc4f120e293cd6a47a5372ae2f84\": container with ID starting with 6d3c0b1342c8ffe27aa2ca705aabb44df842cc4f120e293cd6a47a5372ae2f84 not found: ID does not exist" Jan 21 14:23:18 crc kubenswrapper[4765]: I0121 14:23:18.289403 4765 scope.go:117] "RemoveContainer" containerID="af794d3851bf5fc357e6d243d61f3adf42a02813d4903104fab2a2d9faeca09f" Jan 21 14:23:18 crc kubenswrapper[4765]: E0121 14:23:18.289853 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af794d3851bf5fc357e6d243d61f3adf42a02813d4903104fab2a2d9faeca09f\": container with ID starting with af794d3851bf5fc357e6d243d61f3adf42a02813d4903104fab2a2d9faeca09f not found: ID does not exist" containerID="af794d3851bf5fc357e6d243d61f3adf42a02813d4903104fab2a2d9faeca09f" Jan 21 14:23:18 crc kubenswrapper[4765]: I0121 14:23:18.289874 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af794d3851bf5fc357e6d243d61f3adf42a02813d4903104fab2a2d9faeca09f"} err="failed to get container status \"af794d3851bf5fc357e6d243d61f3adf42a02813d4903104fab2a2d9faeca09f\": rpc error: code = NotFound desc = could not find container \"af794d3851bf5fc357e6d243d61f3adf42a02813d4903104fab2a2d9faeca09f\": container with ID starting with af794d3851bf5fc357e6d243d61f3adf42a02813d4903104fab2a2d9faeca09f not found: ID does not exist" Jan 21 14:23:18 crc kubenswrapper[4765]: I0121 14:23:18.289891 4765 scope.go:117] "RemoveContainer" containerID="7c378c53d1f1aa087c5dbd2dd6ee07a00f8d5c0966182c01bfb522cbda100112" Jan 21 14:23:18 crc kubenswrapper[4765]: E0121 14:23:18.290147 4765 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7c378c53d1f1aa087c5dbd2dd6ee07a00f8d5c0966182c01bfb522cbda100112\": container with ID starting with 7c378c53d1f1aa087c5dbd2dd6ee07a00f8d5c0966182c01bfb522cbda100112 not found: ID does not exist" containerID="7c378c53d1f1aa087c5dbd2dd6ee07a00f8d5c0966182c01bfb522cbda100112" Jan 21 14:23:18 crc kubenswrapper[4765]: I0121 14:23:18.290175 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c378c53d1f1aa087c5dbd2dd6ee07a00f8d5c0966182c01bfb522cbda100112"} err="failed to get container status \"7c378c53d1f1aa087c5dbd2dd6ee07a00f8d5c0966182c01bfb522cbda100112\": rpc error: code = NotFound desc = could not find container \"7c378c53d1f1aa087c5dbd2dd6ee07a00f8d5c0966182c01bfb522cbda100112\": container with ID starting with 7c378c53d1f1aa087c5dbd2dd6ee07a00f8d5c0966182c01bfb522cbda100112 not found: ID does not exist" Jan 21 14:23:19 crc kubenswrapper[4765]: I0121 14:23:19.625083 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fab2097-84aa-4337-8ab1-6559dc30941c" path="/var/lib/kubelet/pods/1fab2097-84aa-4337-8ab1-6559dc30941c/volumes" Jan 21 14:23:44 crc kubenswrapper[4765]: I0121 14:23:44.445877 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:23:44 crc kubenswrapper[4765]: I0121 14:23:44.446468 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:24:09 crc kubenswrapper[4765]: I0121 14:24:09.841534 4765 generic.go:334] "Generic (PLEG): container finished" podID="56d89599-1283-4f0e-a1da-c2ffeff901d5" containerID="da3ecdbe9e8c17406dbf0a6f0a0a752e7854e6984bf663901883eba4c8c18a17" exitCode=0 Jan 21 14:24:09 crc kubenswrapper[4765]: I0121 14:24:09.841577 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvz4/must-gather-hgz6q" event={"ID":"56d89599-1283-4f0e-a1da-c2ffeff901d5","Type":"ContainerDied","Data":"da3ecdbe9e8c17406dbf0a6f0a0a752e7854e6984bf663901883eba4c8c18a17"} Jan 21 14:24:09 crc kubenswrapper[4765]: I0121 14:24:09.842590 4765 scope.go:117] "RemoveContainer" containerID="da3ecdbe9e8c17406dbf0a6f0a0a752e7854e6984bf663901883eba4c8c18a17" Jan 21 14:24:10 crc kubenswrapper[4765]: I0121 14:24:10.829643 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rbvz4_must-gather-hgz6q_56d89599-1283-4f0e-a1da-c2ffeff901d5/gather/0.log" Jan 21 14:24:14 crc kubenswrapper[4765]: I0121 14:24:14.446066 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:24:14 crc kubenswrapper[4765]: I0121 14:24:14.447398 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:24:14 crc kubenswrapper[4765]: I0121 14:24:14.447470 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 14:24:14 crc kubenswrapper[4765]: I0121 14:24:14.448411 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e4cffd9a21b0db7b2979a75be9cd7daae21d859411255361a9373987f2a69d5"} pod="openshift-machine-config-operator/machine-config-daemon-v72nq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:24:14 crc kubenswrapper[4765]: I0121 14:24:14.448492 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" containerID="cri-o://2e4cffd9a21b0db7b2979a75be9cd7daae21d859411255361a9373987f2a69d5" gracePeriod=600 Jan 21 14:24:14 crc kubenswrapper[4765]: I0121 14:24:14.887473 4765 generic.go:334] "Generic (PLEG): container finished" podID="e149390c-e4da-4dfd-bed2-b14de058f921" containerID="2e4cffd9a21b0db7b2979a75be9cd7daae21d859411255361a9373987f2a69d5" exitCode=0 Jan 21 14:24:14 crc kubenswrapper[4765]: I0121 14:24:14.887557 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerDied","Data":"2e4cffd9a21b0db7b2979a75be9cd7daae21d859411255361a9373987f2a69d5"} Jan 21 14:24:14 crc kubenswrapper[4765]: I0121 14:24:14.888137 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerStarted","Data":"30a9a7a90758d3f672ff2f423dd8c7d115179e30ea3b3b9f1ac35d2328558f8f"} Jan 21 14:24:14 crc kubenswrapper[4765]: I0121 14:24:14.888170 4765 scope.go:117] "RemoveContainer" containerID="221c278d6bf8cfec68c2d82cc54b8dd85dd3fa902bf5c02751079ad3a96d8435" Jan 21 14:24:20 crc kubenswrapper[4765]: I0121 14:24:20.592917 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rbvz4/must-gather-hgz6q"] Jan 21 14:24:20 crc kubenswrapper[4765]: I0121 14:24:20.595898 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rbvz4/must-gather-hgz6q" podUID="56d89599-1283-4f0e-a1da-c2ffeff901d5" containerName="copy" containerID="cri-o://d1eb4cc9b8433b32f6ed9e5d5b6088df7b023b0dc889f6dbd2da78e6744d42aa" gracePeriod=2 Jan 21 14:24:20 crc kubenswrapper[4765]: I0121 14:24:20.608088 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rbvz4/must-gather-hgz6q"] Jan 21 14:24:21 crc kubenswrapper[4765]: I0121 14:24:21.052663 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rbvz4_must-gather-hgz6q_56d89599-1283-4f0e-a1da-c2ffeff901d5/copy/0.log" Jan 21 14:24:21 crc kubenswrapper[4765]: I0121 14:24:21.054162 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rbvz4/must-gather-hgz6q" Jan 21 14:24:21 crc kubenswrapper[4765]: I0121 14:24:21.087497 4765 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rbvz4_must-gather-hgz6q_56d89599-1283-4f0e-a1da-c2ffeff901d5/copy/0.log" Jan 21 14:24:21 crc kubenswrapper[4765]: I0121 14:24:21.087845 4765 generic.go:334] "Generic (PLEG): container finished" podID="56d89599-1283-4f0e-a1da-c2ffeff901d5" containerID="d1eb4cc9b8433b32f6ed9e5d5b6088df7b023b0dc889f6dbd2da78e6744d42aa" exitCode=143 Jan 21 14:24:21 crc kubenswrapper[4765]: I0121 14:24:21.087921 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rbvz4/must-gather-hgz6q" Jan 21 14:24:21 crc kubenswrapper[4765]: I0121 14:24:21.087923 4765 scope.go:117] "RemoveContainer" containerID="d1eb4cc9b8433b32f6ed9e5d5b6088df7b023b0dc889f6dbd2da78e6744d42aa" Jan 21 14:24:21 crc kubenswrapper[4765]: I0121 14:24:21.113542 4765 scope.go:117] "RemoveContainer" containerID="da3ecdbe9e8c17406dbf0a6f0a0a752e7854e6984bf663901883eba4c8c18a17" Jan 21 14:24:21 crc kubenswrapper[4765]: I0121 14:24:21.125325 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56d89599-1283-4f0e-a1da-c2ffeff901d5-must-gather-output\") pod \"56d89599-1283-4f0e-a1da-c2ffeff901d5\" (UID: \"56d89599-1283-4f0e-a1da-c2ffeff901d5\") " Jan 21 14:24:21 crc kubenswrapper[4765]: I0121 14:24:21.125431 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb2tn\" (UniqueName: \"kubernetes.io/projected/56d89599-1283-4f0e-a1da-c2ffeff901d5-kube-api-access-jb2tn\") pod \"56d89599-1283-4f0e-a1da-c2ffeff901d5\" (UID: \"56d89599-1283-4f0e-a1da-c2ffeff901d5\") " Jan 21 14:24:21 crc kubenswrapper[4765]: I0121 14:24:21.134506 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d89599-1283-4f0e-a1da-c2ffeff901d5-kube-api-access-jb2tn" (OuterVolumeSpecName: "kube-api-access-jb2tn") pod "56d89599-1283-4f0e-a1da-c2ffeff901d5" (UID: "56d89599-1283-4f0e-a1da-c2ffeff901d5"). InnerVolumeSpecName "kube-api-access-jb2tn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:24:21 crc kubenswrapper[4765]: I0121 14:24:21.171784 4765 scope.go:117] "RemoveContainer" containerID="d1eb4cc9b8433b32f6ed9e5d5b6088df7b023b0dc889f6dbd2da78e6744d42aa" Jan 21 14:24:21 crc kubenswrapper[4765]: E0121 14:24:21.172541 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1eb4cc9b8433b32f6ed9e5d5b6088df7b023b0dc889f6dbd2da78e6744d42aa\": container with ID starting with d1eb4cc9b8433b32f6ed9e5d5b6088df7b023b0dc889f6dbd2da78e6744d42aa not found: ID does not exist" containerID="d1eb4cc9b8433b32f6ed9e5d5b6088df7b023b0dc889f6dbd2da78e6744d42aa" Jan 21 14:24:21 crc kubenswrapper[4765]: I0121 14:24:21.172591 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1eb4cc9b8433b32f6ed9e5d5b6088df7b023b0dc889f6dbd2da78e6744d42aa"} err="failed to get container status \"d1eb4cc9b8433b32f6ed9e5d5b6088df7b023b0dc889f6dbd2da78e6744d42aa\": rpc error: code = NotFound desc = could not find container \"d1eb4cc9b8433b32f6ed9e5d5b6088df7b023b0dc889f6dbd2da78e6744d42aa\": container with ID starting with d1eb4cc9b8433b32f6ed9e5d5b6088df7b023b0dc889f6dbd2da78e6744d42aa not found: ID does not exist" Jan 21 14:24:21 crc kubenswrapper[4765]: I0121 14:24:21.172628 4765 scope.go:117] "RemoveContainer" containerID="da3ecdbe9e8c17406dbf0a6f0a0a752e7854e6984bf663901883eba4c8c18a17" Jan 21 14:24:21 crc kubenswrapper[4765]: E0121 14:24:21.173023 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da3ecdbe9e8c17406dbf0a6f0a0a752e7854e6984bf663901883eba4c8c18a17\": container with ID starting with da3ecdbe9e8c17406dbf0a6f0a0a752e7854e6984bf663901883eba4c8c18a17 not found: ID does not exist" containerID="da3ecdbe9e8c17406dbf0a6f0a0a752e7854e6984bf663901883eba4c8c18a17" Jan 21 14:24:21 crc kubenswrapper[4765]: I0121 14:24:21.173139 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da3ecdbe9e8c17406dbf0a6f0a0a752e7854e6984bf663901883eba4c8c18a17"} err="failed to get container status \"da3ecdbe9e8c17406dbf0a6f0a0a752e7854e6984bf663901883eba4c8c18a17\": rpc error: code = NotFound desc = could not find container \"da3ecdbe9e8c17406dbf0a6f0a0a752e7854e6984bf663901883eba4c8c18a17\": container with ID starting with da3ecdbe9e8c17406dbf0a6f0a0a752e7854e6984bf663901883eba4c8c18a17 not found: ID does not exist" Jan 21 14:24:21 crc kubenswrapper[4765]: I0121 14:24:21.228728 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb2tn\" (UniqueName: \"kubernetes.io/projected/56d89599-1283-4f0e-a1da-c2ffeff901d5-kube-api-access-jb2tn\") on node \"crc\" DevicePath \"\"" Jan 21 14:24:21 crc kubenswrapper[4765]: I0121 14:24:21.348971 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56d89599-1283-4f0e-a1da-c2ffeff901d5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "56d89599-1283-4f0e-a1da-c2ffeff901d5" (UID: "56d89599-1283-4f0e-a1da-c2ffeff901d5"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:24:21 crc kubenswrapper[4765]: I0121 14:24:21.433663 4765 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/56d89599-1283-4f0e-a1da-c2ffeff901d5-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 21 14:24:21 crc kubenswrapper[4765]: I0121 14:24:21.624638 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d89599-1283-4f0e-a1da-c2ffeff901d5" path="/var/lib/kubelet/pods/56d89599-1283-4f0e-a1da-c2ffeff901d5/volumes" Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.471572 4765 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tmtbz"] Jan 21 14:24:34 crc kubenswrapper[4765]: E0121 14:24:34.472468 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fab2097-84aa-4337-8ab1-6559dc30941c" containerName="registry-server" Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.472487 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fab2097-84aa-4337-8ab1-6559dc30941c" containerName="registry-server" Jan 21 14:24:34 crc kubenswrapper[4765]: E0121 14:24:34.472499 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fab2097-84aa-4337-8ab1-6559dc30941c" containerName="extract-content" Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.472509 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fab2097-84aa-4337-8ab1-6559dc30941c" containerName="extract-content" Jan 21 14:24:34 crc kubenswrapper[4765]: E0121 14:24:34.472523 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d12f6e-c08e-48c4-9e9d-d230c7013864" containerName="extract-content" Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.472531 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d12f6e-c08e-48c4-9e9d-d230c7013864" containerName="extract-content" Jan 21 14:24:34 crc kubenswrapper[4765]: E0121 14:24:34.472558 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d89599-1283-4f0e-a1da-c2ffeff901d5" containerName="gather" Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.472566 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d89599-1283-4f0e-a1da-c2ffeff901d5" containerName="gather" Jan 21 14:24:34 crc kubenswrapper[4765]: E0121 14:24:34.472581 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d12f6e-c08e-48c4-9e9d-d230c7013864" containerName="extract-utilities" Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.472589 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d12f6e-c08e-48c4-9e9d-d230c7013864" containerName="extract-utilities" Jan 21 14:24:34 crc kubenswrapper[4765]: E0121 14:24:34.472601 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d12f6e-c08e-48c4-9e9d-d230c7013864" containerName="registry-server" Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.472607 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d12f6e-c08e-48c4-9e9d-d230c7013864" containerName="registry-server" Jan 21 14:24:34 crc kubenswrapper[4765]: E0121 14:24:34.472621 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d89599-1283-4f0e-a1da-c2ffeff901d5" containerName="copy" Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.472627 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d89599-1283-4f0e-a1da-c2ffeff901d5" containerName="copy" Jan 21 14:24:34 crc kubenswrapper[4765]: E0121 
14:24:34.472636 4765 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fab2097-84aa-4337-8ab1-6559dc30941c" containerName="extract-utilities" Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.472642 4765 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fab2097-84aa-4337-8ab1-6559dc30941c" containerName="extract-utilities" Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.472813 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d89599-1283-4f0e-a1da-c2ffeff901d5" containerName="copy" Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.472830 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d12f6e-c08e-48c4-9e9d-d230c7013864" containerName="registry-server" Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.472841 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fab2097-84aa-4337-8ab1-6559dc30941c" containerName="registry-server" Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.472858 4765 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d89599-1283-4f0e-a1da-c2ffeff901d5" containerName="gather" Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.474681 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tmtbz" Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.486960 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tmtbz"] Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.606661 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9dl7\" (UniqueName: \"kubernetes.io/projected/d935c8c9-dc78-450a-abb0-4fa27c7bd58f-kube-api-access-n9dl7\") pod \"certified-operators-tmtbz\" (UID: \"d935c8c9-dc78-450a-abb0-4fa27c7bd58f\") " pod="openshift-marketplace/certified-operators-tmtbz" Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.606765 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d935c8c9-dc78-450a-abb0-4fa27c7bd58f-utilities\") pod \"certified-operators-tmtbz\" (UID: \"d935c8c9-dc78-450a-abb0-4fa27c7bd58f\") " pod="openshift-marketplace/certified-operators-tmtbz" Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.606849 4765 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d935c8c9-dc78-450a-abb0-4fa27c7bd58f-catalog-content\") pod \"certified-operators-tmtbz\" (UID: \"d935c8c9-dc78-450a-abb0-4fa27c7bd58f\") " pod="openshift-marketplace/certified-operators-tmtbz" Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.709045 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9dl7\" (UniqueName: \"kubernetes.io/projected/d935c8c9-dc78-450a-abb0-4fa27c7bd58f-kube-api-access-n9dl7\") pod \"certified-operators-tmtbz\" (UID: \"d935c8c9-dc78-450a-abb0-4fa27c7bd58f\") " pod="openshift-marketplace/certified-operators-tmtbz" Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.709185 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d935c8c9-dc78-450a-abb0-4fa27c7bd58f-utilities\") pod \"certified-operators-tmtbz\" (UID: \"d935c8c9-dc78-450a-abb0-4fa27c7bd58f\") " pod="openshift-marketplace/certified-operators-tmtbz" 
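The VerifyControllerAttachedVolume and MountVolume events above spell out the three volumes behind certified-operators-tmtbz: a projected service-account token (kube-api-access-n9dl7, injected automatically) and two emptyDirs (utilities and catalog-content). A sketch of the volume section implied by those events, written against the Kubernetes Go API; only the pod name, namespace, and volume names come from the log, everything else (including the omitted container spec) is assumed:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// catalogPodSkeleton reconstructs the volume declarations implied by the
// mount events; the kube-api-access-* projected volume never appears in a
// user manifest because the API server injects it for the service account.
func catalogPodSkeleton() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "certified-operators-tmtbz",
			Namespace: "openshift-marketplace",
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
				{Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
			},
		},
	}
}

func main() {
	fmt.Println(len(catalogPodSkeleton().Spec.Volumes), "declared volumes")
}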
Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.709317 4765 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d935c8c9-dc78-450a-abb0-4fa27c7bd58f-catalog-content\") pod \"certified-operators-tmtbz\" (UID: \"d935c8c9-dc78-450a-abb0-4fa27c7bd58f\") " pod="openshift-marketplace/certified-operators-tmtbz"
Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.709801 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d935c8c9-dc78-450a-abb0-4fa27c7bd58f-utilities\") pod \"certified-operators-tmtbz\" (UID: \"d935c8c9-dc78-450a-abb0-4fa27c7bd58f\") " pod="openshift-marketplace/certified-operators-tmtbz"
Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.709853 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d935c8c9-dc78-450a-abb0-4fa27c7bd58f-catalog-content\") pod \"certified-operators-tmtbz\" (UID: \"d935c8c9-dc78-450a-abb0-4fa27c7bd58f\") " pod="openshift-marketplace/certified-operators-tmtbz"
Jan 21 14:24:34 crc kubenswrapper[4765]: I0121 14:24:34.883333 4765 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9dl7\" (UniqueName: \"kubernetes.io/projected/d935c8c9-dc78-450a-abb0-4fa27c7bd58f-kube-api-access-n9dl7\") pod \"certified-operators-tmtbz\" (UID: \"d935c8c9-dc78-450a-abb0-4fa27c7bd58f\") " pod="openshift-marketplace/certified-operators-tmtbz"
Jan 21 14:24:35 crc kubenswrapper[4765]: I0121 14:24:35.104635 4765 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tmtbz"
Jan 21 14:24:35 crc kubenswrapper[4765]: I0121 14:24:35.678917 4765 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tmtbz"]
Jan 21 14:24:36 crc kubenswrapper[4765]: I0121 14:24:36.246252 4765 generic.go:334] "Generic (PLEG): container finished" podID="d935c8c9-dc78-450a-abb0-4fa27c7bd58f" containerID="d49ef14c38b2e18a194ad7e266b16ee9d6218d7be26282c947c9d93a8d2dc47f" exitCode=0
Jan 21 14:24:36 crc kubenswrapper[4765]: I0121 14:24:36.246316 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmtbz" event={"ID":"d935c8c9-dc78-450a-abb0-4fa27c7bd58f","Type":"ContainerDied","Data":"d49ef14c38b2e18a194ad7e266b16ee9d6218d7be26282c947c9d93a8d2dc47f"}
Jan 21 14:24:36 crc kubenswrapper[4765]: I0121 14:24:36.246365 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmtbz" event={"ID":"d935c8c9-dc78-450a-abb0-4fa27c7bd58f","Type":"ContainerStarted","Data":"2bb48a8243b401a30459f8c157850bea4a008c6fe688656d92205f4042927ff8"}
Jan 21 14:24:37 crc kubenswrapper[4765]: I0121 14:24:37.255575 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmtbz" event={"ID":"d935c8c9-dc78-450a-abb0-4fa27c7bd58f","Type":"ContainerStarted","Data":"e6ff8c89f403676dc17487156553546ba5eb9d0cd77d05a6f703ae1ef83ceb00"}
Jan 21 14:24:38 crc kubenswrapper[4765]: I0121 14:24:38.264990 4765 generic.go:334] "Generic (PLEG): container finished" podID="d935c8c9-dc78-450a-abb0-4fa27c7bd58f" containerID="e6ff8c89f403676dc17487156553546ba5eb9d0cd77d05a6f703ae1ef83ceb00" exitCode=0
Jan 21 14:24:38 crc kubenswrapper[4765]: I0121 14:24:38.265056 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmtbz" event={"ID":"d935c8c9-dc78-450a-abb0-4fa27c7bd58f","Type":"ContainerDied","Data":"e6ff8c89f403676dc17487156553546ba5eb9d0cd77d05a6f703ae1ef83ceb00"}
Jan 21 14:24:39 crc kubenswrapper[4765]: I0121 14:24:39.274480 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmtbz" event={"ID":"d935c8c9-dc78-450a-abb0-4fa27c7bd58f","Type":"ContainerStarted","Data":"8c2e22bd5249e3f568b32e59eea23fc5eff965aea3832f98b748d55c986f4620"}
Jan 21 14:24:39 crc kubenswrapper[4765]: I0121 14:24:39.305830 4765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tmtbz" podStartSLOduration=2.864783385 podStartE2EDuration="5.305814283s" podCreationTimestamp="2026-01-21 14:24:34 +0000 UTC" firstStartedPulling="2026-01-21 14:24:36.250575143 +0000 UTC m=+4937.268300965" lastFinishedPulling="2026-01-21 14:24:38.691606041 +0000 UTC m=+4939.709331863" observedRunningTime="2026-01-21 14:24:39.297974652 +0000 UTC m=+4940.315700474" watchObservedRunningTime="2026-01-21 14:24:39.305814283 +0000 UTC m=+4940.323540105"
Jan 21 14:24:45 crc kubenswrapper[4765]: I0121 14:24:45.104828 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tmtbz"
Jan 21 14:24:45 crc kubenswrapper[4765]: I0121 14:24:45.105357 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tmtbz"
Jan 21 14:24:45 crc kubenswrapper[4765]: I0121 14:24:45.219370 4765 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tmtbz"
Jan 21 14:24:45 crc kubenswrapper[4765]: I0121 14:24:45.380533 4765 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tmtbz"
Jan 21 14:24:45 crc kubenswrapper[4765]: I0121 14:24:45.454319 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tmtbz"]
Jan 21 14:24:47 crc kubenswrapper[4765]: I0121 14:24:47.353141 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tmtbz" podUID="d935c8c9-dc78-450a-abb0-4fa27c7bd58f" containerName="registry-server" containerID="cri-o://8c2e22bd5249e3f568b32e59eea23fc5eff965aea3832f98b748d55c986f4620" gracePeriod=2
Jan 21 14:24:47 crc kubenswrapper[4765]: I0121 14:24:47.824980 4765 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tmtbz"
Need to start a new one" pod="openshift-marketplace/certified-operators-tmtbz" Jan 21 14:24:47 crc kubenswrapper[4765]: I0121 14:24:47.985145 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d935c8c9-dc78-450a-abb0-4fa27c7bd58f-catalog-content\") pod \"d935c8c9-dc78-450a-abb0-4fa27c7bd58f\" (UID: \"d935c8c9-dc78-450a-abb0-4fa27c7bd58f\") " Jan 21 14:24:47 crc kubenswrapper[4765]: I0121 14:24:47.985312 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9dl7\" (UniqueName: \"kubernetes.io/projected/d935c8c9-dc78-450a-abb0-4fa27c7bd58f-kube-api-access-n9dl7\") pod \"d935c8c9-dc78-450a-abb0-4fa27c7bd58f\" (UID: \"d935c8c9-dc78-450a-abb0-4fa27c7bd58f\") " Jan 21 14:24:47 crc kubenswrapper[4765]: I0121 14:24:47.986309 4765 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d935c8c9-dc78-450a-abb0-4fa27c7bd58f-utilities\") pod \"d935c8c9-dc78-450a-abb0-4fa27c7bd58f\" (UID: \"d935c8c9-dc78-450a-abb0-4fa27c7bd58f\") " Jan 21 14:24:47 crc kubenswrapper[4765]: I0121 14:24:47.987317 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d935c8c9-dc78-450a-abb0-4fa27c7bd58f-utilities" (OuterVolumeSpecName: "utilities") pod "d935c8c9-dc78-450a-abb0-4fa27c7bd58f" (UID: "d935c8c9-dc78-450a-abb0-4fa27c7bd58f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:24:47 crc kubenswrapper[4765]: I0121 14:24:47.996416 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d935c8c9-dc78-450a-abb0-4fa27c7bd58f-kube-api-access-n9dl7" (OuterVolumeSpecName: "kube-api-access-n9dl7") pod "d935c8c9-dc78-450a-abb0-4fa27c7bd58f" (UID: "d935c8c9-dc78-450a-abb0-4fa27c7bd58f"). InnerVolumeSpecName "kube-api-access-n9dl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 14:24:48 crc kubenswrapper[4765]: I0121 14:24:48.037162 4765 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d935c8c9-dc78-450a-abb0-4fa27c7bd58f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d935c8c9-dc78-450a-abb0-4fa27c7bd58f" (UID: "d935c8c9-dc78-450a-abb0-4fa27c7bd58f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 14:24:48 crc kubenswrapper[4765]: I0121 14:24:48.087844 4765 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d935c8c9-dc78-450a-abb0-4fa27c7bd58f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 14:24:48 crc kubenswrapper[4765]: I0121 14:24:48.087881 4765 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9dl7\" (UniqueName: \"kubernetes.io/projected/d935c8c9-dc78-450a-abb0-4fa27c7bd58f-kube-api-access-n9dl7\") on node \"crc\" DevicePath \"\"" Jan 21 14:24:48 crc kubenswrapper[4765]: I0121 14:24:48.087891 4765 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d935c8c9-dc78-450a-abb0-4fa27c7bd58f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 14:24:48 crc kubenswrapper[4765]: I0121 14:24:48.364411 4765 generic.go:334] "Generic (PLEG): container finished" podID="d935c8c9-dc78-450a-abb0-4fa27c7bd58f" containerID="8c2e22bd5249e3f568b32e59eea23fc5eff965aea3832f98b748d55c986f4620" exitCode=0 Jan 21 14:24:48 crc kubenswrapper[4765]: I0121 14:24:48.364467 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmtbz" event={"ID":"d935c8c9-dc78-450a-abb0-4fa27c7bd58f","Type":"ContainerDied","Data":"8c2e22bd5249e3f568b32e59eea23fc5eff965aea3832f98b748d55c986f4620"} Jan 21 14:24:48 crc kubenswrapper[4765]: I0121 14:24:48.364503 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmtbz" event={"ID":"d935c8c9-dc78-450a-abb0-4fa27c7bd58f","Type":"ContainerDied","Data":"2bb48a8243b401a30459f8c157850bea4a008c6fe688656d92205f4042927ff8"} Jan 21 14:24:48 crc kubenswrapper[4765]: I0121 14:24:48.364510 4765 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tmtbz" Jan 21 14:24:48 crc kubenswrapper[4765]: I0121 14:24:48.364525 4765 scope.go:117] "RemoveContainer" containerID="8c2e22bd5249e3f568b32e59eea23fc5eff965aea3832f98b748d55c986f4620" Jan 21 14:24:48 crc kubenswrapper[4765]: I0121 14:24:48.386788 4765 scope.go:117] "RemoveContainer" containerID="e6ff8c89f403676dc17487156553546ba5eb9d0cd77d05a6f703ae1ef83ceb00" Jan 21 14:24:48 crc kubenswrapper[4765]: I0121 14:24:48.399359 4765 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tmtbz"] Jan 21 14:24:48 crc kubenswrapper[4765]: I0121 14:24:48.407737 4765 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tmtbz"] Jan 21 14:24:48 crc kubenswrapper[4765]: I0121 14:24:48.417381 4765 scope.go:117] "RemoveContainer" containerID="d49ef14c38b2e18a194ad7e266b16ee9d6218d7be26282c947c9d93a8d2dc47f" Jan 21 14:24:48 crc kubenswrapper[4765]: I0121 14:24:48.455097 4765 scope.go:117] "RemoveContainer" containerID="8c2e22bd5249e3f568b32e59eea23fc5eff965aea3832f98b748d55c986f4620" Jan 21 14:24:48 crc kubenswrapper[4765]: E0121 14:24:48.455746 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c2e22bd5249e3f568b32e59eea23fc5eff965aea3832f98b748d55c986f4620\": container with ID starting with 8c2e22bd5249e3f568b32e59eea23fc5eff965aea3832f98b748d55c986f4620 not found: ID does not exist" containerID="8c2e22bd5249e3f568b32e59eea23fc5eff965aea3832f98b748d55c986f4620" Jan 21 14:24:48 crc kubenswrapper[4765]: I0121 14:24:48.455785 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2e22bd5249e3f568b32e59eea23fc5eff965aea3832f98b748d55c986f4620"} err="failed to get container status \"8c2e22bd5249e3f568b32e59eea23fc5eff965aea3832f98b748d55c986f4620\": rpc error: code = NotFound desc = could not find container \"8c2e22bd5249e3f568b32e59eea23fc5eff965aea3832f98b748d55c986f4620\": container with ID starting with 8c2e22bd5249e3f568b32e59eea23fc5eff965aea3832f98b748d55c986f4620 not found: ID does not exist" Jan 21 14:24:48 crc kubenswrapper[4765]: I0121 14:24:48.455811 4765 scope.go:117] "RemoveContainer" containerID="e6ff8c89f403676dc17487156553546ba5eb9d0cd77d05a6f703ae1ef83ceb00" Jan 21 14:24:48 crc kubenswrapper[4765]: E0121 14:24:48.456475 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6ff8c89f403676dc17487156553546ba5eb9d0cd77d05a6f703ae1ef83ceb00\": container with ID starting with e6ff8c89f403676dc17487156553546ba5eb9d0cd77d05a6f703ae1ef83ceb00 not found: ID does not exist" containerID="e6ff8c89f403676dc17487156553546ba5eb9d0cd77d05a6f703ae1ef83ceb00" Jan 21 14:24:48 crc kubenswrapper[4765]: I0121 14:24:48.456514 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6ff8c89f403676dc17487156553546ba5eb9d0cd77d05a6f703ae1ef83ceb00"} err="failed to get container status \"e6ff8c89f403676dc17487156553546ba5eb9d0cd77d05a6f703ae1ef83ceb00\": rpc error: code = NotFound desc = could not find container \"e6ff8c89f403676dc17487156553546ba5eb9d0cd77d05a6f703ae1ef83ceb00\": container with ID starting with e6ff8c89f403676dc17487156553546ba5eb9d0cd77d05a6f703ae1ef83ceb00 not found: ID does not exist" Jan 21 14:24:48 crc kubenswrapper[4765]: I0121 14:24:48.456544 4765 scope.go:117] "RemoveContainer" 
containerID="d49ef14c38b2e18a194ad7e266b16ee9d6218d7be26282c947c9d93a8d2dc47f" Jan 21 14:24:48 crc kubenswrapper[4765]: E0121 14:24:48.456898 4765 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d49ef14c38b2e18a194ad7e266b16ee9d6218d7be26282c947c9d93a8d2dc47f\": container with ID starting with d49ef14c38b2e18a194ad7e266b16ee9d6218d7be26282c947c9d93a8d2dc47f not found: ID does not exist" containerID="d49ef14c38b2e18a194ad7e266b16ee9d6218d7be26282c947c9d93a8d2dc47f" Jan 21 14:24:48 crc kubenswrapper[4765]: I0121 14:24:48.456929 4765 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d49ef14c38b2e18a194ad7e266b16ee9d6218d7be26282c947c9d93a8d2dc47f"} err="failed to get container status \"d49ef14c38b2e18a194ad7e266b16ee9d6218d7be26282c947c9d93a8d2dc47f\": rpc error: code = NotFound desc = could not find container \"d49ef14c38b2e18a194ad7e266b16ee9d6218d7be26282c947c9d93a8d2dc47f\": container with ID starting with d49ef14c38b2e18a194ad7e266b16ee9d6218d7be26282c947c9d93a8d2dc47f not found: ID does not exist" Jan 21 14:24:49 crc kubenswrapper[4765]: I0121 14:24:49.629726 4765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d935c8c9-dc78-450a-abb0-4fa27c7bd58f" path="/var/lib/kubelet/pods/d935c8c9-dc78-450a-abb0-4fa27c7bd58f/volumes" Jan 21 14:26:14 crc kubenswrapper[4765]: I0121 14:26:14.445541 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:26:14 crc kubenswrapper[4765]: I0121 14:26:14.445962 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:26:44 crc kubenswrapper[4765]: I0121 14:26:44.445594 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:26:44 crc kubenswrapper[4765]: I0121 14:26:44.446162 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:27:14 crc kubenswrapper[4765]: I0121 14:27:14.446333 4765 patch_prober.go:28] interesting pod/machine-config-daemon-v72nq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 14:27:14 crc kubenswrapper[4765]: I0121 14:27:14.446888 4765 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 14:27:14 crc kubenswrapper[4765]: I0121 14:27:14.446928 4765 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" Jan 21 14:27:14 crc kubenswrapper[4765]: I0121 14:27:14.447712 4765 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30a9a7a90758d3f672ff2f423dd8c7d115179e30ea3b3b9f1ac35d2328558f8f"} pod="openshift-machine-config-operator/machine-config-daemon-v72nq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 14:27:14 crc kubenswrapper[4765]: I0121 14:27:14.447765 4765 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" containerName="machine-config-daemon" containerID="cri-o://30a9a7a90758d3f672ff2f423dd8c7d115179e30ea3b3b9f1ac35d2328558f8f" gracePeriod=600 Jan 21 14:27:14 crc kubenswrapper[4765]: E0121 14:27:14.578314 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:27:14 crc kubenswrapper[4765]: I0121 14:27:14.709054 4765 generic.go:334] "Generic (PLEG): container finished" podID="e149390c-e4da-4dfd-bed2-b14de058f921" containerID="30a9a7a90758d3f672ff2f423dd8c7d115179e30ea3b3b9f1ac35d2328558f8f" exitCode=0 Jan 21 14:27:14 crc kubenswrapper[4765]: I0121 14:27:14.709124 4765 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" event={"ID":"e149390c-e4da-4dfd-bed2-b14de058f921","Type":"ContainerDied","Data":"30a9a7a90758d3f672ff2f423dd8c7d115179e30ea3b3b9f1ac35d2328558f8f"} Jan 21 14:27:14 crc kubenswrapper[4765]: I0121 14:27:14.709179 4765 scope.go:117] "RemoveContainer" containerID="2e4cffd9a21b0db7b2979a75be9cd7daae21d859411255361a9373987f2a69d5" Jan 21 14:27:14 crc kubenswrapper[4765]: I0121 14:27:14.709840 4765 scope.go:117] "RemoveContainer" containerID="30a9a7a90758d3f672ff2f423dd8c7d115179e30ea3b3b9f1ac35d2328558f8f" Jan 21 14:27:14 crc kubenswrapper[4765]: E0121 14:27:14.710121 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921" Jan 21 14:27:25 crc kubenswrapper[4765]: I0121 14:27:25.614464 4765 scope.go:117] "RemoveContainer" containerID="30a9a7a90758d3f672ff2f423dd8c7d115179e30ea3b3b9f1ac35d2328558f8f" Jan 21 14:27:25 crc kubenswrapper[4765]: E0121 14:27:25.615445 4765 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-v72nq_openshift-machine-config-operator(e149390c-e4da-4dfd-bed2-b14de058f921)\"" pod="openshift-machine-config-operator/machine-config-daemon-v72nq" podUID="e149390c-e4da-4dfd-bed2-b14de058f921"
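The closing "back-off 5m0s" messages show the machine-config-daemon restart loop pinned at the kubelet's maximum restart delay: the delay roughly doubles per crash and is clamped at five minutes. A sketch of that schedule, assuming the upstream defaults of a 10-second initial delay and a 5-minute cap (neither constant appears in the log itself):

package main

import (
	"fmt"
	"time"
)

// restartBackoff doubles the delay per restart and clamps it at five
// minutes, matching the "back-off 5m0s" ceiling in the log. The initial
// delay and cap are assumed upstream defaults, not read from this node.
func restartBackoff(restarts int) time.Duration {
	const (
		initial  = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := initial
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for i := 0; i <= 6; i++ {
		fmt.Printf("restart %d -> backoff %v\n", i, restartBackoff(i))
	}
}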